[PetiteCloud] a small development project for someone

Aryeh Friedman aryeh.friedman at gmail.com
Tue Feb 11 15:24:32 PST 2014


Another thing to work on is getting the generated script right for Linux
with multiple NICs (we will handle any changes on the code side).  You can
find the generated scripts in /tmp/X[install/run].sh, where X is the last
digit of the InstanceID (the first field ['|'-delimited] in
/usr/local/etc/petitecloud/instance.cfg)
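For example, a minimal shell sketch of deriving the script names for one instance; the sample config line is made up for illustration, since only the meaning of the first field is documented here:

```shell
#!/bin/sh
# Derive the generated script paths for a PetiteCloud instance.
# Assumes only what is stated above: the InstanceID is the first
# '|'-delimited field of the line; the sample line itself is invented.
line='12345|some|other|fields'

id=$(printf '%s' "$line" | cut -d'|' -f1)   # first field = InstanceID
x=$(printf '%s' "$id" | tail -c 1)          # its last digit

echo "/tmp/${x}install.sh"   # -> /tmp/5install.sh for this sample line
echo "/tmp/${x}run.sh"       # -> /tmp/5run.sh
```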


On Tue, Feb 11, 2014 at 6:15 PM, Aryeh Friedman <aryeh.friedman at gmail.com> wrote:

> Roughly equivalent to our setup, but we focus more on the computing side
> than on storage because of the N*M problem discussed elsewhere.  We are
> developing on the theory that if PetiteCloud can deliver stable and
> reliable virtualization (and the other things needed for cloud computing)
> on the cheapest equipment we could find, arranged in one of the most
> unstable environments possible [aka the living room of a NYC apartment],
> and if we get that right [not just hacked together but sound in terms of
> software engineering, with just the right sys/netadmin glue], then given
> the right "gloss" applications we can run clouds of any size [and run
> circles around every other platform].  In other words, the best way to
> describe our development model is a sys/netadmin lab with the refinement
> of a computer science lab.  I am also a Jack-of-all-trades and a *VERY*
> heavy believer in the Lego model (the *ONLY* reason we even consider
> PetiteCloud doable by such a small team is the knowledge that 90+% of the
> problem has already been solved by existing OS services and the final 10%
> is nothing but "glue").  Note, though, that it is possible to go too far
> with the Lego model and allow combinations that are just a bad idea [I can
> think of many OpenStack examples].  Of course the above might not scale
> due to inherent performance issues with the native OS tools; that's where
> things like system templates might be useful [fill-in-the-blank scripts
> for non-default behaviors such as setting up a storage abstraction layer].
>
> We have our own plans for how to handle clustering (all the way up to
> between data centers), but due to our belief in not shipping vaporware we
> will not say much more until we get closer to it (toward the middle or
> end of spring is my guess).
>
> A few useful things for the immediate future:
>
> 1. We are *NOT* Linux people and really need some help getting the
> install completely right, and perhaps in packaging up the binaries for
> distribution
> 2. A few how-tos would be nice (the topic does not matter too much)
>
>
> On Tue, Feb 11, 2014 at 5:15 PM, Michael Thoreson <m.thoreson at c4labs.ca> wrote:
>
>> That's fine. I am sure we all appreciate the work Dee and yourself are
>> doing for the project. My coding skills are limited, but I will do what I
>> can. I am, however, good at problem solving, and most of my experience
>> has been in the "Lego" style: basically finding projects and putting them
>> together to see what works and what kinds of new scenarios they solve.
>>
>> That kind of explains my playing around with GlusterFS. I love virtual
>> labs, since I can "mash" anything I want together, and when it gets too
>> mashed up I just erase the machine and start over :) or, in the case of
>> ZFS, just roll back a snapshot. So with that said, if you need me to test
>> different setups let me know. I don't have lots of machines, but they are
>> high-capacity machines.
>>
>> AMD AM3+ system: 8-core 4.4 GHz AthlonFX, 16 GB DDR3-1600, 6x3TB WD Reds
>> in RAIDZ2, currently running FreeNAS, but I am in the process of changing
>> that to TrueOS so that I can test PetiteCloud on real hardware.
>>
>> Alienware M18R2: 2x1TB HD, 16 GB DDR3-1600, i7-3630QM, 802.11abgn,
>> Bluetooth, USB3 (aka the works, because it is Alienware :), running
>> Windows 7 x64 Ultimate.
>>
>> Supermicro G34 system: 3x256 OCZ SSDs in RAID 0, 2x 8-core Opteron
>> 6212s, 32 GB ECC DDR3-1600 RAM, with dual Intel Gigabit NICs.
>>
>> I have 2x 8-port, 1x 4-port, and 1x 24-port Gigabit switches, a basic
>> ASUS wireless abgn router, and an external RAID 10 USB3 HD enclosure with
>> 4x3TB WD Reds. I also have a Supermicro Intel Atom system running pfSense
>> for routing with dual WANs.
>>
>> I am also playing around with a number of WAN optimization projects, and
>> I believe adding that as an advanced function to PetiteCloud would be of
>> great interest to investors and platform adopters, as it would allow the
>> cloud to spread across geographically diverse data centers. But this is
>> for another discussion, as there is already enough on the to-do list :)
>>
>> Michael Thoreson,
>>
>>
>> On 11/02/2014 3:41 PM, Aryeh Friedman wrote:
>>
>>> Forgot to mention the project:
>>>
>>> 1. Make it so the user can decide (advanced options only) whether they
>>> want one bridge per tap or one bridge per host
>>> 2. You will likely need to reorganize
>>> org.petitecloud.net.NetIface.reset()
>>> 3. Keep track of which tap goes to which bridge (just using the same
>>> device number might do it)
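As a concrete sketch of the "one bridge per tap" option on Linux, the fragment below assembles the iproute2 commands a generated script might contain. The pcbrN/pctapN device names, and the use of iproute2, are my assumptions, not the project's actual output:

```shell
#!/bin/sh
# Emit the commands a "one bridge per tap" setup script might contain
# for one instance.  pcbrN/pctapN names are hypothetical; the real
# generated scripts (/tmp/X[install/run].sh) may differ.
gen_bridge_per_tap() {
    n=$1
    printf '%s\n' \
        "ip link add pcbr${n} type bridge" \
        "ip tuntap add dev pctap${n} mode tap" \
        "ip link set pctap${n} master pcbr${n}" \
        "ip link set pcbr${n} up" \
        "ip link set pctap${n} up"
}

gen_bridge_per_tap 0
```

Under the one-bridge-per-host alternative, the generator would instead attach every tap to a single shared bridge, which makes intra-host traffic cheaper but allows cross-tap snooping.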
>>>
>>> Neither Dee nor I will have time for this for a few weeks.
>>>
>>>
>>>
>>> On Tue, Feb 11, 2014 at 4:32 PM, Aryeh Friedman <
>>> aryeh.friedman at gmail.com> wrote:
>>>
>>>     Michael is correct that it is likely not best (depending on use
>>>     case) to have all taps on the same bridge.   The general pros and
>>>     cons of each option are:
>>>
>>>     1. If you're looking at a set of cooperating instances on the same
>>>     host [see note], then having packets that are only internal to the
>>>     machine require routing (even trivial IP forwarding counts as
>>>     routing for this discussion) is not DRY (Don't Repeat Yourself, one
>>>     of the coding standards we strive to meet) at best, and a point of
>>>     failure at worst.
>>>
>>>     2. If your instances are outward-facing (the typical large
>>>     cloud/provider use case), then it doesn't matter, and having one
>>>     bridge per tap is likely more secure (no cross-tap snooping)
>>>
>>>     Note: A typical small development firm use case [PetiteCloud
>>>     itself is developed on such a model {we offload testing onto a
>>>     set of test machines, but we have a single production machine for
>>>     all of our development and business instances}... also note that
>>>     even though we are adding support for external storage and complex
>>>     network configurations, we currently rarely need them in our
>>>     day-to-day non-cloud consulting work].   Our mental model of a
>>>     typical small non-OpenStack user is the same kind of thing: they
>>>     only need very basic services, but delivered with complete
>>>     stability and robustness (set up and forget) from a very small set
>>>     of machines in their office.
>>>
>>>     --
>>>     Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>>>
>>>
>>>
>>>
>>> --
>>> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>>>
>>>
>>> _______________________________________________
>>> petitecloud-general mailing list
>>> petitecloud-general at lists.petitecloud.nyclocal.net
>>> http://lists.petitecloud.nyclocal.net/listinfo.cgi/petitecloud-general-petitecloud.nyclocal.net
>>>
>>
>>
>
>
>
> --
> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>



-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org

