Hey Vacco, comments inline.
On 05/04/2012 06:54 PM, Ismael Vacco wrote:
> I would like to know your opinion on solutions for isolating applications (gears) with cgroups.

We have been using this approach on openshift.redhat.com. We essentially put all the processes for each user into their own control group. That lets us try different policies to control usage, and we've experimented with the cpu, memory, and net_cls subsystems. This is something we're still experimenting with, but it's been pretty handy so far.
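A minimal sketch of what the per-user grouping could look like with the cgroup v1 filesystem interface. The group name, limits, and classid here are hypothetical examples, not OpenShift's actual values; it assumes the cpu, memory, and net_cls controllers are mounted under /sys/fs/cgroup and that it runs as root.

```shell
# Hypothetical per-gear cgroup setup (cgroup v1 layout; names and
# values are illustrative, not what openshift.redhat.com uses).
GEAR=gear-user1

# Create one group per gear under each controller we care about.
for subsys in cpu memory net_cls; do
    mkdir -p /sys/fs/cgroup/$subsys/$GEAR
done

# Cap the gear at 512 MiB of memory and a reduced CPU share.
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/$GEAR/memory.limit_in_bytes
echo 256 > /sys/fs/cgroup/cpu/$GEAR/cpu.shares

# Tag the gear's packets so tc/iptables rules can match on the classid.
echo 0x00100001 > /sys/fs/cgroup/net_cls/$GEAR/net_cls.classid

# Move an existing gear process (PID in $APP_PID) into the group;
# its children are then tracked automatically.
echo "$APP_PID" > /sys/fs/cgroup/memory/$GEAR/tasks
```

Once the processes are in the group, you can swap policies (tighter memory limit, different cpu.shares) without touching the applications themselves.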
Here is a good document for reference:
> * Is overhead a problem?

That was an initial concern of ours, but it hasn't been much of a problem. We still have a lot of investigation to do, though, to see how different workloads and activity levels affect the overhead. For example, CPU limiting might add little overhead when overall system usage is low, but the overhead might spike when there is a lot of contention for resources.
All that said, I think it's one of the most practical systems I've seen for managing resource consumption in a while.
> * Is there a way to predict the number of gears per server, considering the overhead?

Primarily we have been making educated guesses based on the amount of resources available and then watching the systems pretty closely. For example, if we have 8GB of memory on the system and we want to use cgroups to limit each application to 512MB, we would leave something like 2GB for the core system and then be able to run 12 applications using 100% of the remaining memory. However, in most cases applications don't use all of their allocated memory, so you might set an upper bound of something like 50 running applications and migrate some to another machine if memory contention gets too high. It's also good to note application-specific usage: LAMP applications tend to be pretty light in terms of memory consumption, whereas Java applications consume more. All in all, it depends on the workloads and applications you are trying to support.
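The sizing math above, as a back-of-the-envelope sketch (these are the example figures from this thread, not a recommendation):

```shell
# Worked example: 8 GiB total RAM, ~2 GiB reserved for the core system,
# 512 MiB cgroup memory limit per gear.
total_mb=8192
reserved_mb=2048
per_gear_mb=512

# Hard cap if every gear actually used its full allocation.
hard_cap=$(( (total_mb - reserved_mb) / per_gear_mb ))
echo "$hard_cap"   # 12

# In practice gears rarely hit their limit, so you might oversubscribe
# (e.g. allow 50 gears) and migrate gears away when contention rises.
```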
Hope that helps.