15:59:51 #startmeeting Server Working Group Weekly Meeting (2014-01-14)
15:59:51 Meeting started Tue Jan 14 15:59:51 2014 UTC. The chair is sgallagh. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:51 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:59:56 #chair sgallagh mizmo nirik davidstrauss Evolution adamw simo tuanta mitr
15:59:56 Current chairs: Evolution adamw davidstrauss mitr mizmo nirik sgallagh simo tuanta
16:00:00 #topic roll call
16:00:05 .hellomynameis sgallagh
16:00:11 sgallagh: sgallagh 'Stephen Gallagher'
16:00:13 .hellomynameis adamwill
16:00:14 adamw: adamwill 'Adam Williamson'
16:00:20 .hellomynameis tuanta
16:00:24 tuanta: tuanta 'Truong Anh Tuan'
16:00:26 .hellomynameis kevin
16:00:34 nirik: kevin 'Kevin Fenzi'
16:00:40 I'm here, but a bit distracted prioritizing the morning fires...
16:01:07 .hellomynameis davidstrauss
16:01:10 davidstrauss: davidstrauss 'David Strauss'
16:01:22 .hellomynameis duffy
16:01:24 mizmo: duffy 'Máirín Duffy'
16:04:41 Hello all
16:04:46 morning.
16:05:12 ok, I think that's everyone
16:05:23 oh, wait. tuanta: you around?
16:05:35 yes
16:05:46 i sent my hello :)
16:05:53 Ah, I missed it. Sorry
16:06:04 #topic PRD Draft
16:06:30 So, we're 99% complete on the PRD draft. I see three things remaining that we should make into #topic for this meeting:
16:06:42 1) Positioning relative to Fedora Cloud
16:06:54 2) Open questions for the Env/Stacks WG
16:07:01 3) One FIXME left in the draft
16:07:13 #link https://fedoraproject.org/w/index.php?title=Server/Product_Requirements_Document_Draft
16:07:53 I hope that all members of the WG have read the current draft by now?
16:08:13 I personally see us as downstream to Env/Stacks and upstream to Cloud.
16:08:33 #topic Positioning relative to Fedora Cloud
16:08:41 have you asked them?
16:08:43 I had a conversation with the Fedora Cloud folks a few minutes ago
16:08:58 I can summarize and/or pastebin the conversation.
16:09:22 sgallagh: Please do
16:09:30 please summarize
16:09:37 the big question I have is if we are expected to produce cloud server stacks, or if they are.
16:09:52 Full log: http://fpaste.org/68315/97157761/
16:10:33 Summary: Fedora Server is intended to handle traditional infrastructure and the bare-iron portion of IaaS, while Cloud will handle instances within an XaaS infrastructure.
16:10:56 nirik: Could you define "cloud server stacks" please?
16:11:26 XaaS = PaaS or IaaS?
16:11:48 ie, will there be a fedora server openstack "featured application stack" that we produce, test and distribute?
16:11:49 mizmo: Or DBaaS, but yes
16:11:53 I disagree with Cloud owning "instances/containers/PaaS" because I joined this group to work on the latter two in Fedora.
16:12:09 or will the cloud folks produce that
16:12:11 this area feels pretty weird. i don't understand why we have a Cloud WG / planned cloud "product" at all. Cloud isn't a product, it's an environment.
16:12:28 There is nothing inherently more "cloud" than "server" about containers.
16:12:56 in five years 'Fedora Cloud' will sound like 'Fedora for Workgroups'.
16:13:04 heh
16:13:18 davidstrauss: I *think* they're suggesting that we're responsible for the "hypervisor" side of things (adapt as appropriate for containers instead of virt)
16:13:24 If anything, containers make more sense at the base machine/hypervisor layer because that's where resource divisions occur (versus on cloud where you generally have more size flexibility)
16:13:59 adamw: worse, 'for Workgroups' had actual meaning, 'Fedora Cloud' has none
16:14:23 Cloud owning "instances" - definitely; containers - split host/guest; PaaS is somewhat weird
16:14:26 hi
16:14:31 sgallagh: you rang?
16:14:49 Yeah, I wanted to have some of the Cloud WG chime in here
16:14:51 mitr: that's close to my understanding
16:14:53 PaaS is generally a superset of containers
16:14:59 let me get you a traceback
16:15:23 jzb: http://fpaste.org/68318/71611613/
16:15:31 as long as interested parties coordinate I don't much care who "owns" something...
16:15:41 Now, I'm happy to hand Cloud containers and PaaS, assuming I get an opportunity to switch groups.
16:15:48 adamw: anyway I see "Fedora cloud" just as an unfortunate name, the actual work being done sounds very useful, it is just that the name muddies it all
16:15:51 nirik: "owning" is the least resource-intensive way to coordinate
16:16:02 I don't care which group gets it so much as that it's where I'm working.
16:16:12 davidstrauss: Voting membership on one group doesn't restrict participation in others.
16:16:31 mitr: even if 'owned' it still needs coordination with other groups...
16:16:41 nirik: i just worry that we seem to have designed an organizational split precisely halfway up what seems to be a set of plans to build some integrated stacks, and that seems like an odd design choice.
16:16:49 I mean you fire up a cloud instance in your cloud and what do you want to do? run a server on it, right?
16:17:02 nirik: indeed
16:17:03 sgallagh: Yes, but my goal would be to have voting membership in the group working on what I care about. Containers and PaaS are reasons I listed for wanting to be in the Server WG.
16:17:26 adamw: yeah. :(
16:17:35 but I thought cloud would bother with building optimized guest images for specific workloads and the integration with openstack or similar frameworks
16:17:54 is cloud going to do the same work we do with specific roles implemented, just differently?
16:18:04 sgallagh: I'm also involved in a lot of the container integration and engineering we're doing with systemd.
16:18:12 davidstrauss: my understanding has always been that server == what runs applications and cloud environments, aka the infrastructure
16:18:28 davidstrauss: cloud == the instances and platforms on top of that infrastructure
16:18:46 jzb: None of that was even discussed until the groups got formed.
16:18:50 jzb: can you give me an example of what will be your first product?
16:18:58 adamw: ... actually, I'll take back PaaS being "weird"; PaaS being owned by Cloud makes perfect sense - they own all the integration (and note that the relevant split isn't host/guest, but "infrastructure to deploy the app, that infrastructure may not even be running locally (e.g. a web app)" + all the low-level things necessary to make that infrastructure work)
16:19:02 jzb: Sure, but it does beg a certain number of questions about what the Fedora Cloud will "run".
16:19:11 My interpretation was Fedora Cloud == guest OS optimization and images
16:19:39 Are you only interested in customer apps, or is there an expectation that you'd want to be running the Server Roles in there?
16:19:52 jzb: do you mean you see Fedora Server in charge of integrating with the OpenStack framework, for example?
16:19:57 And my interpretation had the precedent of what "Fedora Cloud" had become as of F19/20
16:20:35 simo: the cloud images
16:20:41 cloud.fedoraproject.org -> downloadable cloud images
16:20:48 jzb: your answer is like "the server images"
16:20:55 Why would I assume the Cloud WG would have a different scope?
16:20:59 jzb: it means little or nothing if I do not know what's in the images
16:21:24 simo: the first image is just a customized Fedora that is suited for public + private clouds
16:21:38 Also, at other projects like Ubuntu, the cloud images are very lightly modified derivatives of the server product.
16:21:49 jzb: so you would install fedora server in the image, and just customize some settings for a specific cloud provider?
16:21:59 davidstrauss: Except for CoreOS, which I think is closer to what Fedora Cloud is targeting
16:22:33 simo: not necessarily install fedora server, more like I might install a specific set of packages
16:22:49 simo: or I might not install any new packages, but might pull in Ruby gems or something
16:22:56 ok if that is the case I am not sure I want to participate in the biggest waste of time and resources ever
16:23:19 if we as a project are going to do all the work twice I think I will opt out
16:23:19 simo: Could you clarify that?
16:23:19 sgallagh: I don't see why the approach of CoreOS is more germane to installation on cloud VMs than bare servers, in the first place.
16:23:36 sgallagh: I am not interested in seeing multiple projects do the same work multiple times
16:23:42 simo: where's the duplication?
16:23:48 so...we're looking at a future where we have a Cloud group producing a Cloud product that is a basic environment tweaked for cloud deployment and saying 'Put this on your cloud instance and deploy your stack on it!', and a Server group producing Server products that sound like they could well include integrated stacks suitable for deployment appliance-style to...cloud instances? this seems like a slightly odd message.
16:24:02 CoreOS is closer to a super-lightweight hypervisor than anything else.
16:24:14 adamw: One thing that is completely missing is site-wide infrastructure like OpenShift
16:24:24 jzb: our work in the server wg is to build integration in packages and have roles and stuff that works for a server
16:24:43 if you throw all that out and redo it by picking and changing individual packages we are duplicating work
16:24:50 simo: I suppose we'll want the Server Roles to be useful enough that they will be deployed into Cloud as is.
16:24:58 mitr: of course
16:25:04 simo: I don't think that's what I said.
16:25:07 that is why I do not want to do this work if it isn't
16:25:12 simo: Well, our approach will certainly be more useful for classic infrastructure
16:25:16 For "individual gems", we obviously can't do that much with Roles, or in Server generally - but we do need to agree on an approach
16:25:22 While theirs is more attuned to loading individual apps
16:25:35 Things like the latest Ruby app that isn't packaged (or packageable)
16:25:37 sgallagh: It's not obvious to me that the Roles are more useful for non-cloud images
16:25:43 There's almost nothing except drivers, auto-config, and maybe some kvm/xen hypervisor stuff that should distinguish what we deliver for cloud VMs versus bare metal.
16:25:47 I didn't say they weren't useful for both
16:25:52 sorry
16:26:03 I said they were useful for classic infrastructure, which may be physical or virtual machines
16:26:07 * nirik was expecting our roles to be installable on any fedora 'thing'
16:26:14 * davidstrauss too
16:26:14 nirik +1
16:26:14 simo: have you looked over the current cloud PRD?
16:26:18 simo: https://fedoraproject.org/wiki/Cloud_PRD
16:26:19 sgallagh: to me it seems they will just concentrate on other roles
16:26:28 or *should*
16:26:28 Whereas the Fedora Cloud minimalist approach makes sense for rapid development of apps
16:26:37 davidstrauss: right, i think the weirdness here is that we have defined the WGs in a way that it looks like they're doing the same thing and working towards the same goal (producing complementary ranges of Fedora 'products'), while in this area that's really not how things align
16:26:41 but the way it is explained seems to be that that will not happen
16:26:52 and we will not be able to use the work they do for fedora server
16:27:21 nirik: I want that to be true, but the Fedora Cloud default install seems likely to be quite a bit smaller and may not have all that the roles need
16:27:27 to me, "cloud" is an environment, not a product, as i said earlier. our products should work in the cloud environment, but 'fedora for the cloud' really isn't a product per se.
16:27:28 would the cloud images be like a 'rawhide' for roles, e.g., the hand-tweaked stuff ends up being packaged as a role?
16:27:36 sgallagh: they should then pull what they need from the net?
16:27:48 jzb: point is, will I be able to install a Fedora Server role in an image?
16:27:50 adamw: Agreed.
16:27:53 if not, why not?
16:27:56 simo: I don't see why not
16:28:01 simo: Looking at the Cloud PRD use cases, there is very little overlap with what we can pre-package as Roles
16:28:36 I understand the focus on different personas
16:28:39 that's not the point
16:28:55 my main point is whether these projects are able to cross over or not
16:29:02 not that the default installs need to
16:29:11 simo: Yes, they will be able to cross over
16:29:17 As long as we continue using RPM... (please let's not open that conversation)
16:29:23 but it is important for me not to fragment stuff so that packages from one will be incompatible with the other
16:29:36 mitr: rpm is not sufficient
16:29:42 packages themselves must not diverge
16:29:43 One suggestion I was planning to make during execution planning was that there should be a metapackage in the repository that defines (with strict Requires) what is the Fedora Server platform
16:29:51 fedora cloud should NOT be a different distro than Fedora server or Fedora workstation.
16:30:02 A cloud image should then be simply able to run 'yum install fedora-server-release' and be fully compliant
16:30:03 nirik: my point exactly
16:30:26 nirik: where did anyone suggest that it would be?
16:30:30 * jzb is puzzled.
16:30:49 jzb: I was answering the line of questioning about continuing to use rpm, etc.
16:30:56 Outside of the default install set, they should REALLY all be using the same package repositories
16:30:57 jzb: the scope is unclear enough that that was the sensation/conclusion multiple of us were driven to
16:31:02 nirik: Some of the things you need on a physical server (some monitoring, inventory/querying etc.) work better for VMs/containers when done from the outside, so there is some basis for being different (not in the runtime and APIs, though)
16:31:10 If we ever implied otherwise, I missed it and would have argued loudly.
16:31:11 jzb: there's no need for someone to state something if things are confusing enough
16:31:12 simo: s/driven to/ went on a joyride to/ ;-)
16:31:25 mitr: sure, but that's not the same as producing multiple distros.
16:31:29 jzb: quite possibly, which is why we called you in to help us understand
16:31:55 simo: I disagree with that
16:31:55 anyhow, so, based on this... it's server's responsibility to produce say an 'openstack featured role'?
16:32:06 mitr: the difference is mostly in configuration
16:32:07 (ie, the hypervisor/server part)
16:32:09 simo: we should start from the assumption that all the WGs are trying to work from the same base/distro
16:32:12 sometimes you choose a different tool
16:32:17 if we stray from that, it would need to be explicitly stated.
16:32:21 nirik: I'd say probably yes.
16:32:23 (and then it'd get shot down...)
16:32:39 jzb: We're supposed to be working from that assumption
16:32:41 jzb: yup, that is something that is unclear
16:32:44 That's why the Base Design WG exists
16:32:49 ok, I'm fine with that, except that I would hope we get a lot of help from interested cloud wg folks too. ;)
16:32:54 simo: I don't see why it's unclear.
16:32:55 jzb: I keep having the impression some people really want to split repositories
16:33:01 nirik: Of course.
16:33:01 simo: where are you getting that?
16:33:27 'impression'
16:33:30 simo: Shall we just call a quick vote to see if anyone actually supports that?
16:33:42 sgallagh: separate repositories, you mean?
16:33:53 simo: I think the problem is that we are all still in handwavy high-level buzzword land
16:34:00 WG Vote: Should the products have separate repositories (outside of default install repos)?
16:34:01 nirik: we are
16:34:03 -1
16:34:07 -1
16:34:08 so, people see that and think... "hey, we can redesign the entire universe!"
16:34:12 sgallagh: Practically it doesn't matter, we can always have a single repository with bash-server and bash-cloud, and different images installing different packages
16:34:19 but our vote does not matter unless cloud and desktop WG have the same vote
16:34:21 -100000000000
16:34:37 -1 at least initially.
16:34:38 -1
16:34:39 -1
16:34:44 simo: I'd like it if you'd clarify for me where this "impression" came from.
16:34:56 and, fwiw, I'm -1 on having a different repo.
16:35:03 but this is a design decision, so it's hard to say much more without any actual pros/cons
16:35:21 (-1 about separate repo)
16:35:25 jzb: some of the objectives floated around by people are so conflicting it would be the only option, but I think it was actually discussed before the break and someone was in favor
16:35:25 I mean, do we vote on if we use rpm? :) let's move on? or was there more on this?
16:35:28 I'm leaning towards +1 for separate repos - I'd rather have (rpm -Va) returning nothing on a pristine system than different products using the same packages but modifying them post-RPM-install
16:35:49 jzb: but I do not have clear references for you, that's why I said 'impression', can be (and I hope is) completely wrong
16:36:06 Anyway, separate repos and multiple packages in a single repo are equivalent
16:36:07 mitr: I think that could be better solved with product-specific subpackages, frankly
16:36:14 There are technical ways to avoid that, in any case.
16:36:25 We're getting in the weeds here.
16:36:37 simo: I believe that you may have been misinformed or there was a miscommunication if that's the impression you've had.
16:36:51 simo: Given the responses here, I think we can at least say that this group is largely aligned on "not splitting" and move on
16:36:56 jzb: look at mitr, he says he is in favor of multiple repos, so I am not just imagining things
16:37:25 simo: "separate repos and multiple packages in a single repo are equivalent"
16:37:25 but it seems a minority for now
16:37:34 we're touching the limits of package management, and we won't have any satisfactory solution without rethinking it
16:37:43 mitr: what use case is calling for modifying RPMs post-install?
16:37:50 mitr: I must have missed this.
16:37:57 mitr: I want neither if separate packages in the same repo means hard conflicts in most cases
16:38:12 jzb: Different default configuration
16:38:34 number80: it's not really package management, it really is things like: one WG has this dependency built into package Foo and this other does not
16:38:44 it means the bits differ and they become mutually exclusive
16:38:45 mitr: such as?
16:38:59 jzb: the above-mentioned monitoring from inside/outside the guest
16:39:17 fragmenting our excellent collection of packages for short-term convenience is a bad idea.
16:39:42 simo: Server Role upgrades will also probably run into limitations of the RPM model ("Before you update this database I need to install and run this migration tool for you")
16:39:56 nirik: esp since maintainers are not going to build 2/3x the packages and fix bugs for each flavor of the day
16:40:12 except for some core packages it would mean gratuitous balkanization
16:40:13 * sgallagh notes that we're WAY off-topic from the immediate need: the PRD
16:40:19 sgallagh: +1
16:40:21 sgallagh: +1
16:40:22 simo: I expect _that_ to be the major reason not to worry that splitting the repo will be prevalent
16:40:28 simo: we don't have the manpower to handle a gazillion variants of each package
16:40:37 can we agree that we've agreed no separate repos?
16:40:40 number80: exactly
16:40:41 My main worry about Cloud owning stuff like Docker is limiting our ability to package apps and services the way we might want to -- and controlling that part of the OS.
16:40:45 jzb: +1
16:40:47 jzb: That was my count, yes
16:40:51 jzb: +1
16:40:58 and I will continue with that understanding
16:41:17 davidstrauss: can you give a concrete example, pls?
16:41:21 davidstrauss: As I said "owning" really means "has responsibility for", not "has exclusive rights on"
16:41:31 sgallagh: +1
16:42:16 jzb: Docker offers its own repo model to deploy services in a non-conflicting, integrated way within containers. From some perspectives, it's an alternative to RPM, just container-centric.
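
For reference, a minimal sketch of the single-repository approach discussed above, using the fedora-server-release metapackage name simo mentions; the yum/rpm commands are standard, but what the metapackage would actually Require is illustrative and was not decided in this meeting.

    # Sketch only: "fedora-server-release" is the metapackage name floated above;
    # the strict Requires it would carry are placeholders, not anything agreed here.

    # Turn any Fedora install (e.g. a cloud image) into the Server platform by
    # pulling a single metapackage from the shared repositories:
    yum install fedora-server-release

    # Inspect the dependency set the metapackage pins:
    rpm -q --requires fedora-server-release

    # mitr's verification concern: a pristine system should verify cleanly,
    # i.e. products differ by package selection and config packages, not by
    # post-install modification of RPM-owned files:
    rpm -Va

This keeps all of the products on one package set, with differences limited to their default install sets, which is what the -1 votes above amount to.
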
16:42:17 Ok, so back on topic
16:42:30 I think we answered the first question in the PRD: use-case 8
16:42:35 err, 7. Sorry
16:42:50 It *is* in our charter to manage an OpenStack Node.
16:43:08 sgallagh: I'm just saying it may be troublesome if Cloud "has responsibility for" Docker, but we use it for apps to create an OS that Cloud goes back and makes images for.
16:43:42 sgallagh: which is sorta troublesome, it is still a lot of work to deal with openstack stuff :)
16:43:45 davidstrauss: Ownership is negotiable. Right now, that group happens to have most of the knowledgeable people working on it, so it's just as easy to leave it with them
16:44:05 simo: This is our design, not our immediate implementation plan :) We'll get there.
16:44:17 So the remaining Cloud question is Use Case 8
16:44:20 sgallagh: do we have any voting member that is involved directly with openstack or other cloud infrastructure stuff?
16:44:24 Which touches on davidstrauss' docker questions
16:44:56 simo: Not at present (to my knowledge)
16:44:59 simo: would CloudStack count?
16:45:12 jzb: Who are you referring to?
16:45:12 sgallagh: problem is if we do not have anyone from that background we may fail at building a compatible infrastructure in the server, requiring sudden changes in direction later
16:45:26 I'm a systemd committer. systemd is the primary foundation of CoreOS (more so than Docker) and has deep integration with libvirt. Also, it has its own container system.
16:45:34 jzb: any person involved with cloud-related infrastructure would count
16:45:37 Well, we don't necessarily need a voting member there, just someone we consult regularly
16:45:45 sgallagh: I'm asking simo if Apache CloudStack counts as "cloud infrastructure stuff"
16:45:58 jzb: ie someone that would recognize if a general-level choice would conflict with successful cloud infrastructure deployment
16:45:59 for a featured role, we just need interested folks working on it...
16:46:17 I've added tools to systemd specifically for CoreOS use cases.
16:46:22 ie, I hope we don't expect all our featured stacks need the voting members only to work on them... that doesn't scale.
16:46:50 simo: ah, OK.
16:46:58 nirik: no, but installing cloud infrastructure is not an easy task
16:47:00 nirik: Right, but at the moment we have "OpenStack Node" as a primary use-case. Perhaps you're suggesting we move it to a proposed Role and drop it from our charter?
16:47:09 I'm in the process of building out three separate OpenStack labs...I'll have a bit of a clue
16:47:23 if you know what TripleO is you understand, if you do not know, you may miss why it is important to have someone from that community
16:47:26 danofsatx: Your input would be invaluable
16:47:37 sgallagh: what do you mean by 'node' there?
16:47:42 danofsatx: good to know
16:47:51 "Provide a platform for acting as a node in an OpenStack rack."
16:48:05 We also deploy to bare metal and cloud VMs for my company's infrastructure.
16:48:08 ok, so compute node, or head node...
16:48:15 a node is usually thought of as the hypervisor node
16:48:26 simo: Perhaps we should clarify that in the text
16:48:30 there are other nodes that may act in different roles (ie not run kvm)
16:48:48 Our bare metal is through PXE boots and Kickstart. Our cloud deployments come from images. We run identical software on them after that point.
16:48:48 right.
16:48:49 like keystone (in openstack terms)
16:48:52 Anyway, let's not get too detailed here. The main question is: "Do we consider this a use-case for our charter, or move it to a proposed Role"?
16:49:00 sgallagh: The "Platform" word again - what does it _do_? Is that "kernel+libc to install a node"?
16:49:01 or schedulers or neutron etc ...
16:50:13 mitr: I'm really starting to think this should just be a Role and leave its definition to whomever takes on that Role implementation.
16:50:16 sgallagh: Is OpenStack _that_ special for the design that it needs extra consideration?
16:50:22 I guess I don't care if it's a use case or a role or both. Much of this process is too abstract for me. ;)
16:50:45 sgallagh: That's what I default to as well (assuming the difficulties with deploying OpenStack are OpenStack-derived and not from weird assumptions about the underlying OS)
16:51:16 mitr: There may be some kernel needs, but I think that's workable
16:51:41 Proposal: remove use-case 7, move OpenStack Hypervisor Node reference to proposed Roles list.
16:51:46 Please vote
16:52:05 (I really want to get to the other topics; we're taking too long on this)
16:52:54 +1
16:53:13 +1
16:53:17 +1
16:53:19 +1
16:53:48 sure +1
16:53:54 * nirik shrugs
16:53:59 +1
16:54:00 +1, sure.
16:55:00 #agreed remove use-case 7, move OpenStack Hypervisor Node reference to proposed Roles list
16:55:30 The remaining cloud-related question is use-case 8:
16:55:40 "Users must be able to create, manipulate and terminate large numbers of containers using a stable and consistent interface."
16:56:30 I expect that this *does* have clear assumptions about the underlying OS, and therefore should remain our responsibility.
16:56:43 +1000
16:56:55 yes
16:57:05 why is it part of server and not 'base', in that case?
16:57:07 it has to be intimately tied to the OS
16:57:24 adamw: that's a very good question
16:57:52 adamw: perhaps because Workstation doesn't really need it
16:58:05 maybe it is a cooperation, I am not sure base is interested in building/packaging/integrating the tools to manage the containers though
16:58:14 mitr: who knows?
16:58:30 mitr: IIRC gnome wants to use containers too in some fashion in the future
16:58:41 Well, the kernel side of things will have to be in Base, but I think the management of it belongs to us
16:58:43 simo: Not user-manipulated though
16:59:13 mitr: which is why it seems to me the capability should be in base, but management tools are probably our responsibility for now
16:59:31 unless the wks or cloud group turn out to have an equal stake going fwd
16:59:35 So are we agreed that this use-case fits our charter, then?
16:59:47 but unless we hear differently it seems like something in our laps
16:59:52 We can always share effort and be responsible for it.
16:59:56 +1
17:00:03 sgallagh: The other side of this is that PaaS management (e.g. all of OpenShift) really belongs in Cloud IMHO
17:00:48 sgallagh: and I'm not quite sure how important the space of "raw containers" is without having such an end-to-end management system
17:00:58 (i.e. I'll leave it up to those who know)
17:02:07 mitr: I *think* jzb said he expected the OpenShift-type stuff to be their job too.
17:03:03 sgallagh: that's my understanding currently
17:03:07 Good.
17:03:29 but isn't PaaS a role of the server?
17:03:37 So can we agree that general container management is our responsibility, though (as currently written in use-case 8)?
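
For reference, a rough sketch of the kind of "stable and consistent interface" use-case 8 implies, using the systemd container tooling and the Docker CLI mentioned above; the container name and image are hypothetical examples, not anything specified in the PRD.

    # Sketch only: "webapp01" and its root directory are hypothetical examples.

    # Boot a container from an installed OS tree with systemd-nspawn:
    systemd-nspawn -D /var/lib/machines/webapp01 -b

    # Inspect and manipulate running containers with machinectl:
    machinectl list
    machinectl status webapp01

    # Terminate a container:
    machinectl terminate webapp01

    # The Docker CLI discussed above offers an analogous lifecycle:
    docker run -d --name webapp01 fedora sleep 3600
    docker ps
    docker stop webapp01
    docker rm webapp01

Either toolchain would satisfy the use-case wording; which interface Fedora Server standardizes on is left open here.
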
17:04:13 I still see a bit of conflict there, but I assume we'll sort it out
17:04:18 simo: I'd like to see a PaaS Role as a Server Role contributed by the Cloud group, personally
17:04:38 sgallagh: +1
17:04:54 sgallagh: sounds good to me
17:05:09 interesting
17:05:14 assuming they agree to the role definitions and restrictions etc...
17:05:52 sgallagh: that sounds reasonable. Maybe start a thread on the cloud mailing list about that?
17:06:09 Will do
17:06:22 ok, so any dissenting opinions on use-case 8?
17:06:33 mitr: the value of containers without centralized management is the value of service management without the same
17:06:57 davidstrauss: dumb it down for me please, are you saying "very little"?
17:07:12 mitr: No, I'm saying they're different layers.
17:07:31 sgallagh: I'll abstain on this one
17:07:34 davidstrauss: Also, we didn't say no centralized management
17:07:45 We just punted on a specific implementation of same
17:07:48 sgallagh: I understand. I'm just saying there's value without that.
17:08:11 if the idea is that one of the things the server wg will be producing is containerized server stacks, then it seems reasonable for server wg to handle designing the containerization system used, yes.
17:08:15 Ok, let's move on to the remaining topics. (Many apologies that we're running late)
17:08:37 sgallagh: It's hardly your fault we're running late. :-)
17:08:38 #agreed Use case 8 remains as written
17:08:43 +1
17:08:54 +1
17:09:14 #topic Use Case 6: Env/Stacks and deployment of frameworks
17:10:00 We identified frameworks as sort of a half-way problem between Roles and standard packages.
17:10:28 The plan was to discuss with Env/Stacks whether they saw production of framework stacks as part of their responsibilities.
17:10:45 I think this fell on the floor a bit, but we had some interesting discussions last week that I think are relevant
17:11:14 Specifically that we started talking about Roles being complete solutions and not supporting generic frameworks as roles.
17:11:18 I see Env/Stacks as mostly working in (1) some base-system installs and, mostly, (2) software collections
17:11:30 * mitr notes "Development using libraries not included in the distribution" as a primary Cloud use case
17:11:38 mitr: +1
17:12:39 Proposal: drop "easy deployment of a framework" from our charter.
17:12:47 +1
17:13:13 +1
17:13:23 +1
17:13:23 +1 (for the record)
17:13:50 +1 (as long as env/stacks handles this)
17:13:57 yeah. I'm good with dropping
17:14:00 +1
17:14:01 +1
17:14:07 sgallagh: Is it obvious that the Cloud's solution will easily transfer to Server?
17:14:16 although I really hope someone picks it up
17:14:20 because it *is* important
17:14:28 it doesn't have to be us though
17:14:32 mizmo: Well, in my mind I think by dropping this we're actually talking about saying "We're producing solutions" rather than "here's a bag of tools for you to work with".
17:15:00 That fits in my mind better with the Env/Stacks responsibilities and we'll build roles atop what they give us.
17:15:02 sgallagh: In other words, we are moving towards being an appliance rather than an OS?
17:15:20 mitr: I think the Roles are a little closer to that, but I wouldn't state it that strongly.
17:15:32 sgallagh, sure i just want to make sure someone (env/stacks) is providing the tools
17:16:11 sgallagh: (My general worry is that we _don't have a generally accepted OS API beyond libc_ and are thus less and less an OS as the capabilities of the platform APIs of non-Linux systems expand)
17:17:28 mitr: understood. Let's talk more about that in the execution planning, because I *do* want us to be providing a larger, stable API.
17:17:34 But let's not divert right now.
17:17:38 Any dissenting opinions?
17:17:39 sgallagh: Another point, Cloud explicitly talks about non-packaged frameworks, but if we want to build Roles on top of env/stacks, we are assuming _packaged_
17:17:49 Yes, we are.
17:18:11 Don't we kind of need this resolved? It's structurally equivalent to the "multiple repos" discussion we've had before
17:18:48 I suppose I'm +1 to dropping this from our charter but I'm very worried that it will end up without an agreement.
17:19:34 I really see it as an extension of the development language support that Env/Stacks is already committed to
17:20:00 And I'll make the effort to communicate that to the Env/Stacks group
17:20:05 #agreed drop "easy deployment of a framework" from our charter
17:20:30 #topic FIXME for headless operation
17:20:49 mitr: I'd like to think our OS API includes systemd, dbus (which will soon be kdbus), and the kernel.
17:20:52 Hopefully this will be a simple topic. I think we agree on what is being said here, we just need to figure out where it belongs in the doc
17:21:08 davidstrauss: none of these include things like hash tables or DB persistence
17:21:12 davidstrauss: It starts there, certainly.
17:21:56 mizmo: Can you take the reins on this one? You suggested in the doc that this might be best formulated as a use-case?
17:22:10 yeh, maybe something like.... (Sec)
17:23:12 10. The user must be able to install Fedora Server in a headless mode without pulling in a display framework (such as X).
17:23:56 mizmo: Perhaps "install and fully(?) manage"
17:24:39 mizmo: s/pulling in/using/ please
17:25:05 well i'm not sure of the right wording for that, mitr
17:25:16 but i was thinking, it shouldn't even be installed on the system. not sitting there latent / unused
17:25:28 if i say without using, it's ambiguous whether or not it's sitting installed on the system
17:25:31 mizmo: The latter is harder to accomplish than you might think
17:25:32 mizmo: (I specifically don't want us to get hung up on the presence of libX11.so. Doesn't hurt anything.)
17:25:37 mitr: There is a persistent logging framework (the journal).
17:25:39 sgallagh, oh i know, i work with the anaconda team :)
17:25:50 davidstrauss: completely different thing
17:26:03 mitr, i've talked to admins in the past who didn't want X on the system at all bc of security concerns. that was a while ago tho. is this still relevant?
17:26:14 I agree with mitr. Let's phrase it as "without using a local display framework"
17:26:16 if it's not running, is it a risk?
17:26:16 mitr: It's unclear how a database would be part of an OS API.
17:26:21 mizmo: Not wanting /usr/bin/Xorg, perhaps.
17:26:35 is there a way we could address those admins' security concerns with this use case?
17:26:36 davidstrauss: And yet it is everywhere else, at least some layer over SQLite
17:26:48 mizmo: If it's not running or socket-activated, it's just wasting space, not providing an attack vector (generally)
17:26:56 okay how about this then
17:27:13 mitr: I'm not sure I understand the argument you're making.
17:27:30 davidstrauss: Windows has the registry, for example
17:27:34 10. The user must be able to install and manage Fedora Server in a headless mode where the display framework is not running or socket-activated.
17:27:41 thoughts?
17:28:03 maybe just "where the display framework is unavailable"?
17:28:16 ack
17:28:24 davidstrauss: I'm saying that we don't have hash tables, strings, HTTP client, almost _anything_ within "The OS API", so every project invents or chooses their own and they don't interoperate and we waste resources on dozens of string libraries and error handling libraries and hundreds of http clients and none of them can be integrated all that well
17:28:25 cannot rely on display being available
17:28:32 or that the user can see it
17:28:57 mizmo/sgallagh: Either is fine with me
17:28:58 mitr: We've been trying to standardize on things like NSS and libcurl, as far as I've seen.
17:29:08 davidstrauss: and it didn't work
17:29:13 sgallagh, how about "not active"?
17:29:24 10. The user must be able to install and manage Fedora Server in a headless mode where the display framework is inactive.
17:29:26 but perhaps we chose the wrong thing to start the process, security is too hard
17:29:45 mizmo: +1 works for me
17:29:46 simo: No kidding.
17:30:25 okay i'll update the wiki?
17:30:30 +1
17:30:42 mizmo: sure
17:31:00 mizmo: If you want to do so, be my guest. Do you want to make the other agreed changes or shall I?
17:31:29 #agreed New use-case: "10. The user must be able to install and manage Fedora Server in a headless mode where the display framework is inactive."
17:31:48 sgallagh, i just made that one, i haven't been tracking the other ones
17:31:51 #topic Open Floor
17:31:56 * mizmo is double-booked right now :(
17:32:00 mizmo: No problem, I'll handle them after the meeting.
17:32:24 I'll update the draft after the meeting and send out an email to everyone on the WG.
17:32:36 We need a vote on whether this is ready to send to ratification.
17:32:44 Probably best to do over email.
17:33:15 oh
17:33:20 can i add this to the end of the use-case
17:33:21 We commit to supporting only those GUI applications that can work with forwarded X (or the equivalent on other windowing systems)
17:33:32 "10. The user must be able to install and manage Fedora Server in a headless mode where the display framework is inactive. We commit to supporting only those GUI applications that can work with forwarded X (or the equivalent on other windowing systems)"
17:33:40 Is forwarded X going to still be a thing?
17:33:42 ? (that phrasing was from the earlier point, which i was just about to delete)
17:33:45 * mizmo has no idea
17:34:08 * davidstrauss is pretty sure next-gen display management is not network-transparent.
17:34:08 I'd suggest that we avoid committing to that at this time
17:34:16 okay
17:34:25 i'll just delete it then, it'll be in the history
17:34:43 Doesn't mean we can't do it, just that if we don't, we aren't going back on our word :)
17:34:46 this prd is hot, i like it
17:35:09 :)
17:35:11 mizmo: What's the question again?
17:35:13 yeah, seems fine to me... given that we can adjust later if implementation warrants.
17:35:28 IIRC the "we commit to" language was intended as a restriction, not a commitment to do something
17:35:30 mitr: Just whether we agree to support X-forwarded GUI management apps
17:35:54 mitr: I think the phrasing was ambiguous at least.
17:36:01 sgallagh: This was about the Oracle installer and the like, wasn't it?
17:36:04 And I think "this has to work headless" covers the needs well enough
17:36:22 mitr: Yes, but the Oracle one doesn't work forwarded anyway
17:36:37 hm. OK.
17:36:55 +1 on dropping the X stuff
17:36:58 I don't think the current phrasing is exclusive of loading a full desktop if they want to, just that nothing we ourselves put under the Fedora Server umbrella can require it
17:36:58 davidstrauss: wayland will render locally and transfer over the network via a smart protocol like SPICE, so yeah it will need a local renderer, but the point is that until it has exporting capabilities it can't be relied on in the server
17:37:17 it doesn't really matter (imo) what the tech used for exporting is or where the rendering happens
17:37:20 Please don't quote my last line out of context.
17:37:32 davidstrauss, lol
17:37:44 :)
17:37:48 sgallagh, do we blog this draft for comment?
17:37:50 simo: Agreed.
17:37:56 mizmo: Yes, absolutely.
17:38:06 sgallagh, kk ill wait for you to make the edits and ill blog it later tonight?
17:38:13 I'll update the announce-list mail I sent earlier as well
17:38:43 mizmo: Sounds good to me. I blogged the earlier draft as well (in case you missed it)
17:39:09 sgallagh, i think i may have missed it (but it's been a rough couple of weeks)
17:39:15 mizmo: I can understand that
17:39:19 * davidstrauss has to go
17:40:23 Anything else for Open Floor?
17:40:29 I'll close the meeting in two minutes if not.
17:42:25 Thank you everyone. We've got a PRD!
17:42:38 #endmeeting