In a context where exceptions are caught, I ran the fragment:
    cerr << "allocating" << endl;
    char* arr[100];
    for (int jj = 0; jj < 10; ++jj) {
        cerr << "jj = " << jj << endl;
        arr[jj] = new char[2000000000];
        sleep(30);
    }
    sleep(10);
    for (int jj = 0; jj < 10; ++jj)
        delete[] arr[jj];
    cerr << "deleted" << endl;
The exception was caught with jj = 1, i.e., on the second allocation.
But on top, I see:
top - 14:08:46 up 5:21, 10 users, load average: 0.04, 0.06, 0.08
Tasks: 158 total, 1 running, 157 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.7%us, 2.0%sy, 0.0%ni, 94.5%id, 0.6%wa, 0.1%hi, 0.1%si, 0.0%st
Mem: 1941908k total, 1231804k used, 710104k free, 138372k buffers
Swap: 3899384k total, 0k used, 3899384k free, 876020k cached
and the "710104k free" suggests that I should have failed on the first allocation. Furthermore, the allocation did not alter the values in top (except for a little jitter).
Wherein do I err?
[~]$ uname -a Linux xxxxx 2.6.18-194.32.1.el5 #1 SMP Wed Jan 5 17:53:09 EST 2011 i686 i686 i386 GNU/Linux
Thanks, Mike.
Michael D. Berger wrote:
In a context where exceptions are caught, I ran the fragment:
cerr << "allocating" << endl;
char* arr[100];
for (int jj = 0; jj < 10; ++jj) {
<snip>
Wherein do I err?
It would have been caught on 0 if that was jj++, *not* ++jj (increment *after* the loop, not before).
mark
On Thu, 03 Mar 2011 14:34:13 -0500, m.roth-x6lchVBUigD1P9xLtpHBDw wrote:
Michael D. Berger wrote:
In a context where exceptions are caught, I ran the fragment:
cerr << "allocating" << endl;
char* arr[100];
for (int jj = 0; jj < 10; ++jj) {
<snip>
Wherein do I err?
It would have been caught on 0 if that was jj++, *not* ++jj (increment *after* the loop, not before).
mark
I believe that this is incorrect. Any item in the third position of a for(;;) is executed after the body of the loop. In this case ++jj and jj++ don't make any difference (except that perhaps ++jj is a little faster). In any case, the delays observed indicate that there was one allocation.
Mike.
centos-bounces@centos.org wrote:
In a context where exceptions are caught, I ran the fragment:
    cerr << "allocating" << endl;
    char* arr[100];
    for (int jj = 0; jj < 10; ++jj) {
        cerr << "jj = " << jj << endl;
        arr[jj] = new char[2,000,000,000]; // This line changed by me to underscore what you're doing.
        sleep(30);
    }
    sleep(10);
    for (int jj = 0; jj < 10; ++jj)
        delete[] arr[jj];
    cerr << "deleted" << endl;
The exception was caught with jj = 1, i.e., on the second allocation.
But on top, I see:
top - 14:08:46 up 5:21, 10 users, load average: 0.04, 0.06, 0.08
Tasks: 158 total, 1 running, 157 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.7%us, 2.0%sy, 0.0%ni, 94.5%id, 0.6%wa, 0.1%hi, 0.1%si, 0.0%st
Mem: 1941908k total, 1231804k used, 710104k free, 138372k buffers
Swap: 3899384k total, 0k used, 3899384k free, 876020k cached
and the "710104k free" suggests that I should have failed on the first allocation. Furthermore, the allocation did not alter the values in top (except for a little jitter).
Wherein do I err?
Holy RAMbo, batman! How many GB of RAM do you intend to allocate? Once you allocate 2GB like you did, you MUST be running a bigmem or x64 kernel to allocate another 2GB.
You won't see 'new'd memory as "taken" in top(8) because malloc() is a bug. Bits are set in a page table, but no memory is actually written to, and nothing really changes, until the program attempts to write. What you did was fall off the end of a bit string keeping track of malloc()'d pages, that's all.
(ps mark, the ++jj or jj++ takes place after the first loop's action, not before .:. ++jj and jj++ have identical effect).
Insert spiffy .sig here
//me ******************************************************************* This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This footnote also confirms that this email message has been swept for the presence of computer viruses. www.Hubbell.com - Hubbell Incorporated**
On Thu, 03 Mar 2011 14:38:52 -0500, Brunner, Brian T. wrote:
[...]
Holy RAMbo, batman! How many GB of RAM do you intend to allocate? Once you allocate 2GB like you did, you MUST be running a bigmem or x64 kernel to allocate another 2GB.
You won't see 'new'd memory as "taken" in top(8) because malloc() is a bug. Bits are set in a page table, but no memory is actually written to, and nothing really changes, until the program attempts to write. What you did was fall off the end of a bit string keeping track of malloc()'d pages, that's all.
(ps mark, the ++jj or jj++ takes place after the first loop's action, not before .:. ++jj and jj++ have identical effect).
Insert spiffy .sig here
[...]
Yes, I do expect to do a bit of arithmetic. I will need several blocks of about 0.5G, and I am checking the limits. Is it true, then, that I won't really know if I succeeded with the allocation until I try to write the memory? What will happen then? Is there a way to check without actually writing?
Mike.
On Thu, Mar 03, 2011 at 07:55:57PM +0000, Michael D. Berger wrote:
Yes, I do expect to do a bit of arithmetic. I will need several blocks of about 0.5G, and I am checking the limits. Is it true, then, that I won't really know if I succeeded with the allocation until I try to write the memory? What will happen then? Is there a way to check without actually writing?
/proc/sys/vm/overcommit_memory (or sysctl vm.overcommit_memory)
From the kernel Documentation:
This value contains a flag that enables memory overcommitment.
When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory.
When this flag is 1, the kernel pretends there is always enough memory until it actually runs out.
When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory.
This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it.
The default value is 0.
Mike.
On Thu, 03 Mar 2011 15:03:34 -0500, Stephen Harris wrote:
/proc/sys/vm/overcommit_memory (or sysctl vm.overcommit_memory)
From the kernel Documentation:
This value contains a flag that enables memory overcommitment.
When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory.
When this flag is 1, the kernel pretends there is always enough memory until it actually runs out.
When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory.
This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it.
The default value is 0.
I just wrote a sequence of values (kk % 256) and (after changing to unsigned char) read back successfully. I did see some action in top.
Now given my numbers, it would seem that I am "overcommitted". Leaving the flag you mention at 0 (which it is), do I run a risk of a later failure?
Thanks, Mike.
On 3/3/2011 2:26 PM, Michael D. Berger wrote:
On Thu, 03 Mar 2011 15:03:34 -0500, Stephen Harris wrote:
/proc/sys/vm/overcommit_memory (or sysctl vm.overcommit_memory)
From the kernel Documentation:
This value contains a flag that enables memory overcommitment.
When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory.
When this flag is 1, the kernel pretends there is always enough memory until it actually runs out.
When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory.
This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it.
The default value is 0.
I just wrote a sequence of values (kk % 256) and (after changing to unsigned char) read back successfully. I did see some action in top.
Now given my numbers, it would seem that I am "overcommitted". Leaving the flag you mention at 0 (which it is), do I run a risk of a later failure?
Since there are other processes running, each one usually doesn't know how overcommitted the system is - or soon might be (what if several copies of this program are running?). And it's all virtual, so you aren't really out of memory until you're out of swap. Then the kernel will kill something more or less at random (the OOM killer).
On Thu, 03 Mar 2011 15:03:34 -0500, Stephen Harris wrote:
On Thu, Mar 03, 2011 at 07:55:57PM +0000, Michael D. Berger wrote:
[...]
/proc/sys/vm/overcommit_memory (or sysctl vm.overcommit_memory)
From the kernel Documentation:
This value contains a flag that enables memory overcommitment.
When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory.
When this flag is 1, the kernel pretends there is always enough memory until it actually runs out.
When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory.
This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it.
The default value is 0.
It appears that option 2 would be the best for me, so I set: sysctl vm.overcommit_memory=2
However, it resets to 0 on reboot, and only root can reset it. It would be good if it would be set to 2 on reboot. Is there a good way to do this? I suppose I could put something in /etc/init.d/ if there is no better way.
Thanks, Mike.
On Fri, Mar 04, 2011 at 12:52:51AM +0000, Michael D. Berger wrote:
It appears that option 2 would be the best for me, so I set: sysctl vm.overcommit_memory=2
However, it resets to 0 on reboot, and only root can reset it. It would be good if it would be set to 2 on reboot. Is there a good way to do this? I suppose I could put something in /etc/init.d/ if there is no better way.
Just add it to /etc/sysctl.conf
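For example, the entry would look like this (a sketch; /etc/sysctl.conf already exists on CentOS, so just append the line):

```
# /etc/sysctl.conf -- applied at boot via "sysctl -p"
vm.overcommit_memory = 2
```

Running `sysctl -p` as root applies the file immediately, without a reboot.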
It appears that option 2 would be the best for me, so I set: sysctl vm.overcommit_memory=2
However, it resets to 0 on reboot, and only root can reset it. It would be good if it would be set to 2 on reboot. Is there a good way to do this? I suppose I could put something in /etc/init.d/ if there is no better way.
man sysctl.conf (file: /etc/sysctl.conf)
I'm betting vm.overcommit_memory=2 is the line you want to add to /etc/sysctl.conf.
This is a bet that "sysctl -p" is somewhere in your /etc/rc.d/rc.sysinit
centos-bounces@centos.org wrote:
Yes, I do expect to do a bit of arithmetic. I will need several blocks of about 0.5G, and I am checking the limits. Is it true, then, that I won't really know if I succeeded with the allocation until I try to write the memory? What will happen then? Is there a way to check without actually writing?
That's where the malloc() bug can hit you. The bits I spoke of are only in the page table of your process's virtual memory. That's limited to 4GB without some mechanism for getting past the 32-bit addressing snag. If you use calloc(), you discover at page-allocation time that either your virtual space or the system's virtual space is exhausted; this is preferable to discovering it in the nth iteration of some deeply buried loop that wrote the 'one page too many'.
Stephen: Good clue on the overcommit_memory setting. I'd forgotten it.
Can you work on only one file at a time, with each file under a GB, or must you have more than a GB of data in front of you at once?
On Mar 3, 2011, at 2:21 PM, "Michael D. Berger" m_d_berger_1900@yahoo.com wrote:
In a context where exceptions are caught, I ran the fragment:
    cerr << "allocating" << endl;
    char* arr[100];
    for (int jj = 0; jj < 10; ++jj) {
        cerr << "jj = " << jj << endl;
        arr[jj] = new char[2000000000];
        sleep(30);
    }
    sleep(10);
    for (int jj = 0; jj < 10; ++jj)
        delete[] arr[jj];
    cerr << "deleted" << endl;
The exception was caught with jj = 1, i.e., on the second allocation.
On a 32-bit kernel a process can only address 3GB of user space; the remaining 1GB of the address space is reserved for the kernel.
I don't know what you're trying to do, but I'm sure you could do it a whole lot better than by pre-allocating all available memory.
Don't go by top; it only shows committed memory, not reserved.
-Ross