
Solaris 10: Virtual Memory Exhausted [Resolved]

Our group is all programmers who use Linux or MacOS exclusively, but a customer runs Solaris 10, and we need our code to work there. So we scrounged up an old SunFire V240 and rented a Solaris 10 VM to test on.

The code compiles just fine on the VM, but it fails on the SunFire. Our build includes a giant autogenerated C++ file, and it's this huge file that fails to compile, with the message: virtual memory exhausted: Not enough space

I can't figure it out. The SunFire has 8 GB of RAM, and the virtual memory exhaustion happens when the compile reaches just over 1.2 GB. Nothing else significant is running. Here are some memory stats near the failure:

Using prstat -s size:

SIZE (virtual memory): 1245 MB
RSS  (real memory):    1200 MB
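(The exact prstat invocation was along the lines of the following; the process count and refresh interval are just what was convenient, not significant:)

# prstat -s size -n 5 5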

According to echo "::memstat" | mdb -k, lots of memory is still free:

Free (cachelist) is 46% of total.
Free (freelist)  is 26% of total.

All user processes are using about 17% of RAM just before the compile fails. (After the failure, user RAM usage drops to 2%.) That agrees with the other RAM usage numbers (1.2 GB / 8.0 GB ≈ 15%).

swap -l reports that the swap is completely unused.

Some other details:

We're building with g++ 6.1.0, itself compiled for 64-bit. It fails whether or not we pass the -m64 flag to the compiler.
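(As a sanity check that the toolchain really emits 64-bit objects, we compile a throwaway file and inspect it; test.cpp here is just an example name, and the file output is what we'd expect for a SPARC 64-bit object:)

# g++ -m64 -c test.cpp -o test.o
# file test.o
test.o: ELF 64-bit MSB relocatable SPARCV9 Version 1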

# uname -a 
SunOS servername 5.10 Generic_147440-27 sun4u sparc SUNW,Sun-Fire-V240

Both the VM and the SunFire have system limits set like this:

>ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 10
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 29995
virtual memory          (kbytes, -v) unlimited

(using su)>rctladm -l
...
process.max-address-space   syslog=off     [ lowerable deny no-signal bytes ]
process.max-file-descriptor syslog=off     [ lowerable deny count ]
process.max-core-size       syslog=off     [ lowerable deny no-signal bytes ]
process.max-stack-size      syslog=off     [ lowerable deny no-signal bytes ]
process.max-data-size       syslog=off     [ lowerable deny no-signal bytes ]
process.max-file-size       syslog=off     [ lowerable deny file-size bytes ]
process.max-cpu-time        syslog=off     [ lowerable no-deny cpu-time inf seconds ]
...
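For completeness, the effective per-process values (not just the rctladm summary above) can also be checked on a live process with prctl, for example against the current shell:

# prctl -n process.max-address-space $$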

We've tried setting the stack size to "unlimited" but that doesn't make any identifiable difference.
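(Specifically, what we tried was along the lines of the following, re-running the build in the same shell afterwards:)

>ulimit -s unlimited
>gmake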

# df
/                  (/dev/dsk/c1t0d0s0 ):86262876 blocks  7819495 files
/devices           (/devices          ):       0 blocks        0 files
/system/contract   (ctfs              ):       0 blocks 2147483608 files
/proc              (proc              ):       0 blocks    29937 files
/etc/mnttab        (mnttab            ):       0 blocks        0 files
/etc/svc/volatile  (swap              ):14661104 blocks  1180179 files
/system/object     (objfs             ):       0 blocks 2147483465 files
/etc/dfs/sharetab  (sharefs           ):       0 blocks 2147483646 files
/platform/sun4u-us3/lib/libc_psr.so.1(/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1):86262876 blocks  7819495 files
/platform/sun4u-us3/lib/sparcv9/libc_psr.so.1(/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1):86262876 blocks  7819495 files
/dev/fd            (fd                ):       0 blocks        0 files
/tmp               (swap              ):14661104 blocks  1180179 files
/var/run           (swap              ):14661104 blocks  1180179 files
/home              (/dev/dsk/c1t1d0s0 ):110125666 blocks  8388083 files

Edit 1: swap output after setting up 16GB swap file:

Note: the block size is 512 bytes.

# swap -l
swapfile            dev    swaplo   blocks    free
/dev/dsk/c1t0d0s1   32,25  16       2106416   2106416  
/home/me/tmp/swapfile -    16       32964592  32964592

# swap -s
total: 172096k bytes allocated + 52576k reserved = 224672k used, 23875344k available 
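(For reference, the swap file was set up with something like the following; mkfile and swap -a are the standard Solaris tools for this:)

# mkfile 16g /home/me/tmp/swapfile
# swap -a /home/me/tmp/swapfile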

Question Credit: Jim
Asked May 13, 2019

It turned out to be a bug in gmake 3.81. When I ran the compile command directly, without make, it was able to use much more memory. It seems there was a known bug in 3.80, something like this; that bug was supposed to be fixed in 3.81, but I was getting a very similar error.
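(One easy way to reproduce the compile outside of make is to have gmake print the recipe for the failing object instead of running it, then paste the g++ line into the shell by hand; the object file name here is only an example:)

# gmake -n generated_big_file.o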

So I tried gmake 3.82. The compile proceeded, and I haven't seen the VM error again.

I was never able to get it to dump core, so I don't actually know what was running out of virtual memory: gmake, g++, or as. It just wouldn't dump core on that error. Nor do I know what the bug really was, but it seems to be working now.


credit: Jim
Answered May 13, 2019