08:52:46 <decauseDevConf> #startmeeting D105-Maxamillion-Containers
08:52:46 <zodbot> Meeting started Fri Feb  5 08:52:46 2016 UTC.  The chair is decauseDevConf. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:52:46 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
08:52:46 <zodbot> The meeting name has been set to 'd105-maxamillion-containers'
08:52:52 <decauseDevConf> #topic Containers
08:53:03 <decauseDevConf> #topic Containers are not new
08:53:32 <decauseDevConf> the original container, chroot
08:53:51 <decauseDevConf> software running in an environment where it thinks root resolves to something that isn't
08:54:04 <decauseDevConf> you allow it to exist in an alternative view, without modifying the software
08:54:15 <decauseDevConf> it has a lot of aspects that are not sophisticated
08:54:19 <decauseDevConf> lack of copy-on-write
08:54:26 <decauseDevConf> network isolation
08:54:44 <decauseDevConf> the concept of execution runtime in this space
08:54:53 <decauseDevConf> Who is familiar with microkernels?
08:55:13 <decauseDevConf> the idea has existed for a long time
08:55:40 <decauseDevConf> FreeBSD has been doing a pretty modern enablement for a while now
08:56:01 <decauseDevConf> LXC is where things start to get interesting in my opinion
08:56:08 <decauseDevConf> IBM introduced a set of tools
08:56:13 <decauseDevConf> and people created vocabulary for it
08:56:29 <decauseDevConf> it was a way of interacting with the system, and using system-level components, that drives new tech today
08:56:36 <decauseDevConf> namespacing and cgroups in the kernel
08:56:44 <decauseDevConf> defining runtime, adding constraints
08:56:52 <decauseDevConf> systemd-nspawn containers
08:58:14 <decauseDevConf> in 2013, dotCloud released Docker
08:58:34 <decauseDevConf> in 2014, coreOS released Rkt
08:58:40 <decauseDevConf> they came up with a spec for a container, and an image
08:58:54 <decauseDevConf> that was the reference implementation, and many discussions on standardizing on what that all means
08:59:04 <decauseDevConf> 2015 the Open Container Project launched
08:59:15 <decauseDevConf> #link http://opencontainers.org
08:59:26 <decauseDevConf> runc is a stand-alone command-line tool to launch containers based on this spec
08:59:30 <decauseDevConf> it looks a lot like docker
08:59:43 <decauseDevConf> and to my knowledge, is compatible with docker
08:59:47 <decauseDevConf> #topic Docker
09:00:05 <decauseDevConf> the reason I want to talk about it is that it is pretty much ubiquitous
09:00:10 <decauseDevConf> in Fedora
09:00:22 <decauseDevConf> we're working on delivering some of these things for the sake of community involvement
09:00:26 <decauseDevConf> and delivering new tech
09:00:35 <decauseDevConf> the docker daemon has language bindings for different languages
09:00:40 <decauseDevConf> i'm most familiar with python
09:00:45 <decauseDevConf> a container is an instance
09:00:50 <decauseDevConf> basically a tarball with metadata
09:01:01 <decauseDevConf> the images are built with a Dockerfile
09:01:13 <decauseDevConf> and I like to reference SELinux stuff
09:01:24 <decauseDevConf> Dan Walsh pushed SELinux upstream into docker
09:01:33 <decauseDevConf> "containers don't contain" to steal his quote
09:01:51 <decauseDevConf> there are many examples of containers redefining application context; it doesn't constrain it
09:02:03 <decauseDevConf> we needed to lower the surface for exploitation
09:02:08 <decauseDevConf> #topic Docker File
09:02:14 <decauseDevConf> effectively, you have a base image
09:02:25 <decauseDevConf> you want a maintainer ref, so you can get feedback
09:02:32 <decauseDevConf> each "stanza" is a new layer
09:02:36 <decauseDevConf> and you can share these layers
09:02:45 <decauseDevConf> build time will cache that layer, to be re-used
09:02:54 <decauseDevConf> we can expose ports, add files directly into the image
09:03:02 <decauseDevConf> so that when the container is launched, it has it in there
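The stanzas described above can be sketched as a minimal Dockerfile. This is an illustrative sketch, not from the talk: the base image, maintainer name, package, file paths, and port are all hypothetical.

```dockerfile
# Base image: each instruction below adds a new, cacheable layer
FROM fedora:23

# Maintainer reference, so users can send feedback
MAINTAINER Jane Doe <jane@example.com>

# Install a runtime (hypothetical package choice)
RUN dnf install -y python3 && dnf clean all

# Add application files directly into the image,
# so the container has them when launched
ADD app.py /srv/app/app.py

# Expose the port the service listens on
EXPOSE 8080

CMD ["python3", "/srv/app/app.py"]
```

On a rebuild after editing only app.py, the FROM and RUN layers are served from the build cache and only the stanzas from ADD onward re-execute.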
09:03:23 <decauseDevConf> one thing this enables, and something that is popular
09:03:33 <decauseDevConf> in a traditional system environment
09:03:40 <decauseDevConf> (considering VM's are traditional now)
09:04:06 <decauseDevConf> you can migrate an application, let's pretend it is written in language X (to avoid religious debate)
09:04:19 <decauseDevConf> and it may not be compatible with the latest version of Fedora
09:04:33 <decauseDevConf> we can take a platform of a F22 based image, and then run that on Fedora23
09:04:45 <decauseDevConf> this enables us to take application runtimes, away from being bound to the host
09:05:03 <decauseDevConf> this allows us to not have to duplicate the operating system, the filesystem, etc...
09:05:19 <decauseDevConf> containers don't replace virt necessarily, but, you have a new ability to do this now.
09:05:28 <decauseDevConf> #topic Microservices
09:05:37 <decauseDevConf> not sure which is the chicken/egg
09:05:47 <decauseDevConf> containers are the most popular way to deliver them
09:05:55 <decauseDevConf> I like to reference back to the microkernel ideas
09:06:05 <decauseDevConf> (which are old ideas)
09:06:18 <decauseDevConf> you have small components talking to each other via IPC, and even across the network
09:06:29 <decauseDevConf> we could geo-disperse, and load balance, and spin them up and down
09:06:34 <decauseDevConf> microkernels can do that
09:06:40 <decauseDevConf> the Amoeba operating systems did this
09:06:50 <decauseDevConf> they have the low-level system running anywhere on the net
09:06:57 <decauseDevConf> take that "up the stack"
09:07:06 <decauseDevConf> for applications, we have similar parallels
09:07:17 <decauseDevConf> we have tiny components that can get moved around
09:07:23 <decauseDevConf> sysadmins know this via pipes
09:07:42 <decauseDevConf> you can daisy-chain your 30-command one-liners
09:07:52 <decauseDevConf> but that is not the best way to run enterprise
09:08:02 <decauseDevConf> Services are the "unix way" airquotes
09:08:08 <decauseDevConf> "do one thing, do it well"
09:08:23 <decauseDevConf> decoupling tightly coupled components
09:08:36 <decauseDevConf> if they are loosely coupled, you can disperse system components
09:08:42 <decauseDevConf> and then can interact, as if they were local
09:08:55 <decauseDevConf> you can do a loopback network to host it in one environment
09:09:08 <decauseDevConf> now you have infra clouds, multizone, multi-tenant environments
09:09:17 <decauseDevConf> that will hopefully add resilience to the service
09:09:33 <decauseDevConf> you can then have smaller components, easy independent tests, and iterate faster, and get code quality up
09:09:44 <decauseDevConf> we don't sacrifice quality, and get faster development
09:09:51 <decauseDevConf> #topic Immutable Infrastructure
09:09:57 <decauseDevConf> this is a new-ish one
09:10:06 <decauseDevConf> it is effectively fully-automated
09:10:12 <decauseDevConf> minimal human interaction
09:10:24 <decauseDevConf> that isn't just about firing up virt templates, with post-boot task runs, and configmanagement
09:10:32 <decauseDevConf> the goal is being done at deploy time
09:10:35 <decauseDevConf> you don't change it
09:10:52 <decauseDevConf> "don't do config management in the environment, do it at build time."
09:11:02 <decauseDevConf> have these pieces that can be tested as a cohesive unit
09:11:09 <decauseDevConf> to verify that the thing in testing is what's in prod
09:11:19 <decauseDevConf> there is an assurance that nothing has changed
09:11:43 <decauseDevConf> unexpected changes, human or cfgmgmt, are easily detected, and avoided
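The detection idea above can be sketched in a few lines of Python: content-address each artifact with a digest, in the spirit of Docker image IDs, and compare what runs in prod against what was tested. The byte strings and the `digest` helper are hypothetical illustrations, not anything from the talk.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content-address an artifact, like a container image ID."""
    return hashlib.sha256(artifact).hexdigest()

# The artifact built once and promoted through dev/test/stage
built = b"app v1.1 + library Zed 1.1"
expected = digest(built)

# What is actually running in prod
running_unchanged = b"app v1.1 + library Zed 1.1"
running_drifted = b"app v1.1 + library Zed 1.2"  # someone patched it in place

assert digest(running_unchanged) == expected  # nothing changed: digests match
assert digest(running_drifted) != expected    # unexpected change: easily detected
print("drift check passed")
```

Because the whole artifact is hashed, any human or config-management change on the end host shows up as a digest mismatch, with no need to audit individual files.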
09:11:55 <decauseDevConf> we deploy these artifacts
09:11:59 <decauseDevConf> it can be a container image
09:12:10 <decauseDevConf> a Dockerfile, that runs and does its thing; you can distribute it and run a service
09:12:18 <decauseDevConf> there should be no added requirement for cfgmgmt
09:12:23 <decauseDevConf> you don't need to run it on the end-host
09:12:27 <decauseDevConf> the container image build
09:12:38 <decauseDevConf> the delivery mechanism is different
09:12:47 <decauseDevConf> you have a tarball, with everything you wanted
09:13:00 <decauseDevConf> artifacts can be tested and graduated
09:13:07 <decauseDevConf> it can load diff environments
09:13:12 <decauseDevConf> the image can go through unchanged
09:13:20 <decauseDevConf> some update of library Zed
09:13:29 <decauseDevConf> you have version 1.1 in deve/test/stage
09:13:38 <decauseDevConf> 1.2 has an updated security fix
09:13:42 <decauseDevConf> ops deploys it
09:13:49 <decauseDevConf> something breaks
09:13:57 <decauseDevConf> what is your rollback strategy?
09:14:02 <decauseDevConf> without containers, it is much harder
09:14:12 <decauseDevConf> #topic Deployment Models
09:14:26 <decauseDevConf> Let's say you're running v1
09:14:28 <decauseDevConf> you wanna update
09:14:34 <decauseDevConf> you have your tests/CI
09:14:37 <decauseDevConf> and you update to 1.2
09:14:40 <decauseDevConf> it tests fine
09:14:46 <decauseDevConf> you roll-out to env
09:14:54 <decauseDevConf> what happens when something breaks on one node?
09:15:04 <decauseDevConf> you can think of a horrible doomsday scenario
09:15:08 <decauseDevConf> power loss
09:15:13 <decauseDevConf> custom rpm triggers
09:15:20 <decauseDevConf> commits to master from a new person
09:15:27 <decauseDevConf> how do you verify compnoents?
09:15:35 <decauseDevConf> how do you know what state your filesystem or kernel is in
09:15:48 <decauseDevConf> if you lost power
09:15:58 <decauseDevConf> how do you log into your system
09:16:03 <decauseDevConf> what if we could avoid that?
09:16:15 <decauseDevConf> rpm package triggers?
09:16:20 <decauseDevConf> here are the docs
09:16:30 <decauseDevConf> #topic RPM Package Manager
09:16:39 <decauseDevConf> each package manager has an order of operations
09:16:47 <decauseDevConf> at each step, a side-effect can happen
09:16:54 <decauseDevConf> in an upgrade, this is mutable state
09:17:03 <decauseDevConf> something can happen mid state
09:17:17 <decauseDevConf> #topic Immutable Operating Systems
09:17:22 <decauseDevConf> this is where project atomic comes in
09:17:42 <decauseDevConf> the idea that you can have these deployment artifacts that can be "all or nothing" deployed
09:17:46 <decauseDevConf> it includes newer tech
09:17:52 <decauseDevConf> and it is built on top of traditional tech
09:17:57 <decauseDevConf> we're iterating on the world of before
09:18:05 <decauseDevConf> it is an upstream project
09:18:16 <decauseDevConf> CentOS and Fedora are working together
09:18:30 <decauseDevConf> being a Fedora person, I'll talk about our involvement, but CentOS is working here as well :)
09:18:45 <decauseDevConf> #topic Fedora Atomic Host
09:18:55 <decauseDevConf> what is changing is the delivery mechanism
09:19:02 <decauseDevConf> there will be some pain in getting folks up to speed
09:19:08 <decauseDevConf> but in an immutable env
09:19:14 <decauseDevConf> you don't want to do package installs on a live system
09:19:26 <decauseDevConf> it is a minimized footprint
09:19:38 <decauseDevConf> we want it to be the best at running containers
09:19:46 <decauseDevConf> it is easy to do, thanks to OSTree
09:19:55 <decauseDevConf> that is our new deployment artifact
09:20:03 <decauseDevConf> orchestration is where kubernetes comes in later
09:20:26 <decauseDevConf> rpm-ostree is a root filesystem tree, similar to git commits
09:20:30 <decauseDevConf> you can revert back and roll forward
09:20:33 <decauseDevConf> it has ref ids
09:20:42 <decauseDevConf> rpm-ostree allows us to build trees from sets of RPMs
09:20:55 <decauseDevConf> you can use software like you have, but then use OSTree to be the build artifact
09:21:19 <decauseDevConf> there is no "we don't know if dracut finished running initramfs"
09:21:30 <decauseDevConf> entire trees can be tested as a single unit
09:22:09 <decauseDevConf> you can go in and inspect atomic hosts to find version strings
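The git-like model described above can be sketched as a toy in Python: hash-linked commits plus a movable ref, where deploy and rollback just move the ref. This is an illustration of the commit/ref/rollback idea under stated assumptions, not the rpm-ostree implementation; all names here are hypothetical.

```python
import hashlib

class TreeRepo:
    """Toy model of OSTree-style deployments: hash-linked tree commits and one ref."""

    def __init__(self):
        self.commits = {}  # ref id -> (parent ref id, tree content)
        self.ref = None    # the currently deployed commit

    def commit(self, tree: str) -> str:
        """Build a new tree commit; its id covers the parent, like a git commit."""
        ref_id = hashlib.sha256((str(self.ref) + tree).encode()).hexdigest()[:12]
        self.commits[ref_id] = (self.ref, tree)
        self.ref = ref_id  # deployment is all-or-nothing: just move the ref
        return ref_id

    def rollback(self) -> str:
        """Revert back by pointing the ref at the previous tree."""
        parent, _ = self.commits[self.ref]
        self.ref = parent
        return self.ref

repo = TreeRepo()
v1 = repo.commit("fedora-atomic tree 23.1")
v2 = repo.commit("fedora-atomic tree 23.2")  # roll forward
assert repo.ref == v2
assert repo.rollback() == v1                 # the whole tree reverts as one unit
```

Because the unit of change is the entire tree, there is no half-applied package state to reason about: the system is at exactly one ref or exactly the previous one.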
09:22:14 <decauseDevConf> #topic Orchestration
09:22:21 <decauseDevConf> you have containers
09:22:26 <decauseDevConf> how do you run across many hosts?
09:22:29 <decauseDevConf> Kubernetes
09:22:34 <decauseDevConf> the few main vocab
09:22:39 <decauseDevConf> pod, service, replication controller
09:22:57 <decauseDevConf> pod - set of containers, scheduled as a unit. They share IPC and network, and they can speak to each other as if on localhost
09:23:08 <decauseDevConf> Service - one or more pods, takes them as a unit
09:23:17 <decauseDevConf> rep controller - manages those services
09:23:26 <decauseDevConf> pluggable options for persistent storage
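The vocabulary above can be illustrated with a minimal pod manifest. This is a hedged sketch: the names, labels, images, and port are hypothetical, and `v1` reflects the Kubernetes API of that era.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
  labels:
    app: example             # services and replication controllers select pods by label
spec:
  containers:                # all containers in the pod are scheduled together, on one host
  - name: web
    image: example/web:1.1   # hypothetical image
    ports:
    - containerPort: 8080    # reachable from the sidecar as localhost:8080
  - name: sidecar
    image: example/logger:1.0
```

Because the two containers share the pod's network namespace, the sidecar talks to the web container over localhost, just as the talk describes.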
09:23:31 <decauseDevConf> #topic Developers
09:23:46 <decauseDevConf> my hope was that developers can apply these to their workflow
09:23:51 <decauseDevConf> OpenShift Origin provides this
09:24:11 <decauseDevConf> it is a self-service, for devs to submit code to, in a configurable pipeline
09:24:37 <decauseDevConf> you can then, take that, and run it using openshift, or you can take container out of that environment, and run it directly in a kubernetes env running on atomic
09:25:00 <decauseDevConf> this means you can have a fully immutable pipeline from dev to production
09:25:04 <decauseDevConf> #topic Q&A
09:25:20 <decauseDevConf> Q: Is a pod always on a single-host in Kubernetes?
09:25:36 <decauseDevConf> A: Yes. To my knowledge, yes, as defined recently-ish
09:26:11 <decauseDevConf> Q: you're including config management in the immutable infra. It seems to me, for things like keys... do you have a sense of how many people are doing this?
09:26:27 <decauseDevConf> Q: Do you have much perspective on how many people are doing config management at build?
09:27:04 <decauseDevConf> A: I offer it as a stepping stone. Everything you used to do, you can do at build time now. You can supply config data to containers. I know of about half-a-dozen folks who inject at build time.
09:27:24 <decauseDevConf> Q: The other thing folks use config management for is to build the kubernetes files to begin with.
09:27:46 <decauseDevConf> A: yes. esp if you have much investment in a cfgmgmt solution
09:28:32 <decauseDevConf> Q: Based on the turnout for cfgmgmtcamp, bigger than last year, the consensus was not that cfgmgmt is dead, it just needs to evolve.
09:28:39 <decauseDevConf> #topic Thank you all for your time
09:28:41 <decauseDevConf> #endmeeting