16:31:11 #startmeeting fedora_coreos_meeting
16:31:12 Meeting started Wed Sep 30 16:31:11 2020 UTC.
16:31:12 This meeting is logged and archived in a public location.
16:31:12 The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:31:12 Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:31:12 The meeting name has been set to 'fedora_coreos_meeting'
16:31:15 .hello2
16:31:18 #topic roll call
16:31:18 cyberpear: cyberpear 'James Cassell'
16:32:06 .hello2
16:32:06 bgilbert: bgilbert 'Benjamin Gilbert'
16:32:16 .hello2
16:32:18 lucab: lucab 'Luca Bruno'
16:33:30 hi
16:33:30 .hello2
16:33:31 nasirhm1: Sorry, but you don't exist
16:33:44 .hello nasirhm
16:33:45 .hello2
16:33:45 nasirhm1: nasirhm 'Nasir Hussain'
16:33:48 jlebon: jlebon 'None'
16:34:14 #chair cyberpear bgilbert lucab rtsisyk nasirhm1 jlebon
16:34:14 Current chairs: bgilbert cyberpear dustymabe jlebon lucab nasirhm1 rtsisyk
16:34:17 welcome rtsisyk
16:34:41 Welcome rtsisyk
16:34:41 .hello2
16:34:42 lorbus: lorbus 'Christian Glombek'
16:34:52 #chair lorbus
16:34:52 Current chairs: bgilbert cyberpear dustymabe jlebon lorbus lucab nasirhm1 rtsisyk
16:35:01 also
16:35:03 .hello sohank2602
16:35:03 skunkerk: sohank2602 'Sohan Kunkerkar'
16:35:04 .chair lorbus
16:35:06 lorbus is seated in a chair with a nice view of a placid lake, unsuspecting that another chair is about to be slammed into them.
16:35:23 🤣
16:35:28 :D
16:35:30 dustymabe: Nice one :P
16:35:52 hello o/
16:35:55 #topic Action items from last meeting
16:35:59 #chair cverna
16:35:59 Current chairs: bgilbert cverna cyberpear dustymabe jlebon lorbus lucab nasirhm1 rtsisyk
16:36:15 There were no action items from last meeting so we're good there
16:36:30 moving on
16:36:37 \o/
16:36:37 #topic Move rpmdb path from /usr/share/rpm to /usr/lib/sysimage/rpm
16:37:08 This was a follow-on from a bug discussion some time ago.
16:37:21 we recently had the database change in rpm from bdb to sqlite
16:37:40 and at the same time we also proposed to move the database location, but the rpm maintainer didn't want to mix too many changes at once
16:37:46 IIRC
16:38:03 so should we file a change request for Fedora 34 to make that happen in the F34 time frame?
16:38:15 maybe the smoothest path is to have rpm-ostree do a fallback to /usr/share/rpm if /usr/lib/sysimage/rpm doesn't exist?
16:38:27 are we trying to do this at the RPM level?
16:38:40 cyberpear: Sounds like a nice idea.
16:38:57 #link https://github.com/coreos/fedora-coreos-tracker/issues/639
16:39:34 lucab: thanks for that. sorry I failed to paste the link
16:39:52 I think walters was part of the original discussion
16:40:21 now I'm also trying to find a link to the bug where we were discussing this with the rpm maintainer
16:40:27 .hello2
16:40:27 aoei: aoei 'Joanna Doyle'
16:40:29 sorry, late
16:40:33 original thread is http://lists.rpm.org/pipermail/rpm-maint/2017-October/006681.html
16:40:41 * dustymabe waves at aoei
16:40:46 #chair aoei
16:40:47 Current chairs: aoei bgilbert cverna cyberpear dustymabe jlebon lorbus lucab nasirhm1 rtsisyk
16:40:47 hey dusty
16:40:55 #link http://lists.rpm.org/pipermail/rpm-maint/2017-October/006681.html
16:41:06 hey aoei!
16:41:49 dustymabe: https://bugzilla.redhat.com/show_bug.cgi?id=1838691#c24
16:41:56 that one?
16:42:01 that's it
16:42:50 would SUSE be annoyed at me if I told them to use ZFS instead
16:42:55 c:
16:43:06 so.. if we want this to happen we should take some steps to get there, I believe. Is it something we want to happen? Should we file a change request for f34?
16:43:51 * dustymabe leans on jlebon and walters for expertise here since they have the most experience
16:44:27 it'd be a nice change, though it would be good to have someone from rpm lead it
16:44:55 as far as rpm-ostree goes, it wouldn't be very hard to adapt. but changing all of Fedora will clearly need a lot of thought and discussion
16:45:37 jlebon: would you be willing to reach out to the rpm maintainers to see if that's something they'd take up? or maybe offer to co-lead if they seem hesitant?
16:46:01 dustymabe: i can do that, sure
16:46:04 if it goes thru, I'd expect a similar change for /var/lib/dnf
16:46:09 ok, having read on a bit more, maybe the /usr/lib/rpmdb move is slightly more realistic
16:46:13 we can own the "rpm-ostree-based variants" part of the equation
16:46:35 yeah, that would at least take that off the rpm maintainers' "worry-sphere"
16:47:11 #action jlebon to reach out to the rpm maintainers to see if the relocation of the rpmdb path is something they're willing to own for F34
16:47:59 #topic Add FCOS to AWS Marketplace
16:48:10 #link https://github.com/coreos/fedora-coreos-tracker/issues/635
16:48:19 bgilbert: do you want to intro this one?
16:48:25 sure
16:48:51 up to this point, we've considered AWS GovCloud and AWS Marketplace out of scope for Fedora CoreOS
16:49:01 the main reason is paperwork
16:49:32 "Fedora" would need to sign agreements etc., "Fedora" isn't an entity that can sign agreements, and then things get messy
16:50:09 but davdunc from AWS has posted a proposal at the Fedora level which could work around that problem
16:50:19 #link https://pagure.io/Fedora-Council/tickets/issue/332
16:50:46 basically, AWS would host Marketplace entries in an account it controls, on behalf of Fedora.
16:51:07 AWS CI would listen on fedmsg, pick up new AMI builds, copy the AMIs and send them through its publication pipeline.
16:51:29 once the Marketplace AMIs are updated (which can take a few hours) they'd send back another fedmsg reporting the AMI IDs.
16:52:07 to connect with this system, we'd need to emit fedmsgs when AMIs are ready, which is
16:52:09 #link https://github.com/coreos/fedora-coreos-tracker/issues/225
16:52:39 and we'd presumably want to add the Marketplace AMIs to stream metadata, so we'd want a job that listens for the reply fedmsg and PRs stream metadata.
16:52:48 there's some additional stuff around rollbacks, but that's the bulk of it.
16:52:51 what this gains us:
16:53:10 1) GovCloud and any other restricted AWS regions
16:53:23 2) a nice Marketplace landing page for FCOS
16:53:30 3) possibly some additional statistics
16:53:49 sounds pretty promising
16:53:50 4) easier FCOS access for AWS customers who have compliance requirements satisfied by Marketplace (e.g. its automatic security scans)
16:54:01 5) easier for other Marketplace providers to ship their own software on top of FCOS
16:54:46 I should note that this is still in the proposal phase, but I'm not aware of any opposition at the Fedora level
16:54:48 sounds great to me. it would be nice if we could retroactively update our metadata to include the Marketplace AMIs, just for tracking, in case a user comes to us with an AMI we can't identify
16:54:53 huh, didn't realize that Marketplace AMIs were separate from the AMIs we already upload
16:55:01 bgilbert: are there any drawbacks or catches you're aware of?
16:55:14 dustymabe: you mean the release metadata?
16:55:24 presumably we could do that too
16:55:33 jlebon: yup, Marketplace has always worked that way
16:55:43 ehh. not even necessarily that.. even if it was just the meta.json
16:55:55 somewhere we could look to have that info correlated
16:56:03 but that's a stretch goal
16:56:09 i guess that means they'll also take care of GC?
16:56:12 meta.json is an implementation detail IMO, but sure
16:56:23 so biggest blocker from our end is https://github.com/coreos/fedora-coreos-tracker/issues/225
16:56:27 ?
16:56:27 jlebon: hmm, I hadn't thought about GC
16:56:52 it's kinda awkward to have a reference to something in our metadata that we don't actually own
16:56:53 jlebon: we're not paying for storage, so the only concern there is if we want to actively withdraw old images. do we?
16:56:54 bgilbert: so the stream metadata will contain two sets of AMIs, the "normal" one and the Marketplace one?
16:57:09 I think part of the point of Marketplace is that users retain access to old versions; not sure though
16:57:14 lucab: right
16:57:26 (multiplied per each region)
16:57:34 jlebon: we already have provisions to do that, e.g. for Packet and DO
16:57:43 jlebon: it's just that those identifiers will be static, not varying per release
16:57:48 lucab: right
16:57:50 * dustymabe really likes that google just has one image for $global
16:58:03 aoei: a couple drawbacks
16:58:30 aoei: we'd be shipping two sets of images per release, which could be confusing
16:59:09 we wouldn't directly control the Marketplace AMIs (in terms of API access to manipulate them) and there's publishing latency
16:59:18 right
16:59:23 those don't sound too bad imo
16:59:24 davdunc expects the latency to be on the order of a couple hours
16:59:26 silly question: the Marketplace set sounds like a superset of our normal set, won't it obsolete the latter?
16:59:35 I can tell you that with Container Linux the latency sometimes ran to two weeks
16:59:49 bgilbert: i was about to say I don't trust the two hours thing :P
16:59:53 but this would be much more automated than the pipeline CL was using
17:00:01 ah okay
17:00:22 I think we'd still preferentially recommend the community AMIs for users who can use them
17:00:30 +1
17:00:32 but we'd need to be careful to explain the tradeoffs
17:00:41 right
17:00:43 makes sense
17:00:51 is it one landing page for FCOS in the Marketplace or one per Marketplace AMI?
17:01:09 one per 'product', i.e. release stream
17:01:12 it's a huge win that GovCloud users would be able to use it, though
17:01:19 we could probably just keep publishing the community ones on getfedora.org/coreos, but also link to the Marketplace landing page
17:01:25 here's the old CL landing page: https://aws.amazon.com/marketplace/pp/B01H62FDJM
17:01:38 jlebon: that's not a bad idea
17:01:49 jlebon: maybe still list the AMIs in stream metadata though?
17:01:51 just not on the website
17:01:56 SGTM
17:02:22 so we should try to prioritize https://github.com/coreos/fedora-coreos-tracker/issues/225 ? and then they'll do most of the rest of the work?
17:02:34 bgilbert: i think so yeah. still need to think that part through
17:02:39 jlebon: +1
17:02:52 dustymabe: I think that's basically the situation. with the caveat that I don't know timelines.
17:03:05 I know davdunc has been working on this for a while already, so it may not move super-fast
17:03:10 looping back from fedmsg to github doesn't seem great, maybe we can skip that if there is no actual use of that information?
17:03:26 we definitely want 225 anyway for other purposes
17:03:33 sorry, I worded it badly
17:03:39 lucab: right, IMO we should do that loop only if needed
17:03:55 where does github come in?
17:04:00 stream metadata update
17:04:09 it is possibly great, but with non-negligible additional infra and human involvement
17:04:27 got ya
17:04:31 strictly speaking we can always add that part later
17:04:36 * cverna can probably help with infra work
17:05:14 bgilbert: any particular outcomes we need to end up with from today's discussion?
17:05:42 dustymabe: I don't think so. mostly raising awareness. 225 was already something we wanted to do.
17:05:51 any questions I missed from the scrollback?
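For context on the stream-metadata part of the discussion: the published FCOS stream metadata already carries per-region AMI IDs, and the idea above is that Marketplace AMIs would be listed alongside them. A quick way to inspect the existing structure (real endpoint; requires curl and jq — the second command runs the same query against a tiny inline sample so the shape is visible without network access):

```shell
# Look up the current stable x86_64 AMI for one region from the
# published Fedora CoreOS stream metadata:
curl -s https://builds.coreos.fedoraproject.org/streams/stable.json |
  jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].image'

# Same jq path against an inline sample mirroring the metadata layout:
echo '{"architectures":{"x86_64":{"images":{"aws":{"regions":{"us-east-1":{"image":"ami-0123"}}}}}}}' |
  jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].image'
# prints: ami-0123
```

Where exactly Marketplace AMIs would hang in this tree was still undecided at the time of the meeting.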
17:06:10 you answered mine
17:06:11 bgilbert: if we plan to begin with "linking to the Marketplace landing page" then yes, it seems reasonable to do the "fedmsg-to-PR" part only later and if needed
17:06:19 i think everyone is in agreement
17:06:31 thanks for bringing this up bgilbert!
17:06:43 indeed, thanks bgilbert!
17:06:47 shall we move to the next topic?
17:06:49 +1 this sounds really nice
17:06:50 lucab: I think the fedmsg-to-PR part would be a good thing to do, but yeah, we can figure that out
17:06:54 dustymabe: +1
17:07:04 #topic Add basic monitoring tools to the base image
17:07:10 #link https://github.com/coreos/fedora-coreos-tracker/issues/628
17:07:30 bgilbert: I had "why not deprecate the community set" but I think you answered already with "latency could be very high and with variance"
17:07:40 lucab: +1
17:07:53 so this topic is a bit open ended
17:08:05 but it has a general theme
17:08:19 of generically useful tools that we might want to add to the base image
17:08:28 i think each one of them would need to be considered individually
17:08:32 Re #628. My proposal is to add some essential troubleshooting tools such as `strace`, `tcpdump`, `iotop`, `ioping` and others to the base image.
17:08:44 yes, let's discuss this list one by one.
17:09:09 rtsisyk: we might not have enough time for all of them today. if we don't we can carry on the discussion in the ticket
17:09:12 - `strace` - strace can be used to troubleshoot the container runtime itself. strace is just 1.5MB, why not add it?
17:09:29 ok
17:09:40 what do you suggest?
17:09:57 bgilbert: lucab: jlebon: do you have any preference on how we carry forward this discussion?
17:10:08 in-ticket would be better IMO
17:10:09 is going through each one ideal, or do you think there would be a better way?
17:10:14 i think one useful criterion is whether the tool can be helpful in debugging container runtime issues
17:10:32 right, basically if you can get network and run a container without the tool, you should probably run the tool in a container
17:10:35 (mentioned this in https://github.com/coreos/fedora-coreos-tracker/issues/628#issuecomment-697765701)
17:10:56 bgilbert: yep.
17:11:18 one example that I use is `dig`.. for example if you can't debug dns you might not be able to download a container in order to debug dns :)
17:11:24 that's why `dig` is in FCOS
17:11:49 just going to try to advocate for at least one of those right now: strace. i think we should include it
17:11:55 I agree that almost everything can be run via container. But anyway. The primary purpose of a host operating system (such as CoreOS) is to work with the underlying hardware. Tools like `dmidecode`, `smartmontools`, `pciutils`, `usbutils` are needed to troubleshoot the bare metal.
17:12:00 tcpdump is needed to debug the network
17:12:05 just to set the background, "it's just 1KiB" isn't a focal point. "how can this be abused as a turing-complete interpreter", "did you try running it in podman", and "in which cases is this needed and can't be layered for debugging" are more interesting IMHO
17:12:19 I can't download a container without a working network, but I have IPMI/BMC and tcpdump to fix the network.
17:12:51 dig is extremely useful
17:12:56 so should we lay out some criteria like jlebon stated in the ticket and then evaluate each one (probably in the time between now and next meeting)?
17:13:16 yeah, make clear criteria for the ticket
17:13:20 then analyse each case with those
17:13:24 dustymabe +1
17:13:26 there are some things that might not meet the criteria, but we still want to include
17:13:27 we're going to keep having questions about this, so yeah, we probably need formal criteria at this point
17:13:47 SGTM
17:13:48 should we try to work out that criteria now?
17:14:09 yup, key one being as you mentioned, is it needed to fix something that might stop you downloading the image
17:14:19 like, that's a pretty high priority consideration right
17:14:45 there are a lot of tools that could conceivably be used to troubleshoot things
17:14:48 another would be: is it impossible to run in a container?
17:14:57 gdb would be useful for debugging podman but we shouldn't ship it
17:14:59 bgilbert: true, but maybe we can limit that
17:15:06 collaborative hacking doc: https://hackmd.io/3LlGUzYjT-C3sbkIDBquKQ
17:15:10 I vote +1 for GDB :)
17:15:10 gdb is less likely to fix a networking issue stopping you from getting a container image
17:15:39 the only reason I excluded GDB is because it depends on Python
17:16:17 dustymabe: should some negative criteria be included too? (like, depends on python)
17:16:20 I am not sure gdb is useful for golang, and it can be debugged remotely using a static stub
17:16:54 gdb can be used for getting the stack traces
17:17:07 but anyway, I don't insist
17:17:12 tcpdump is really needed
17:17:15 rtsisyk: a minimal OS is never going to feel as comfortable at a command prompt as a full server OS. we can't ship everyone's preferred tools (or even anyone's preferred tools) and still be minimal.
17:17:48 ok. What do you suggest to do without `lspci`, for example?
17:18:07 how do you find out that a server has an NVIDIA GPU, for example?
17:18:18 shouldn't that work fine in a privileged container?
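The "run it in a privileged container" pattern raised above looks roughly like this (the flags are standard podman; the Fedora image tag and the choice of `lspci` are just an example):

```shell
# One-off hardware inspection without adding pciutils to the base OS:
# run a stock Fedora container with broad host access (--privileged,
# plus the host's PID and network namespaces), install the tool
# inside the container, and run it.
podman run --rm --privileged --pid=host --net=host \
    registry.fedoraproject.org/fedora:32 \
    bash -c 'dnf -y -q install pciutils && lspci'
```

This is a sketch of the pattern, not an endorsed workflow; it obviously still depends on network access to pull the image and packages, which is exactly the limitation the criteria discussion is about.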
17:18:28 yeah I think things like dmidecode and lspci have merit
17:18:30 everything can be done in a privileged container
17:18:33 (just tried it here in my pet container and it's not lying :) )
17:18:44 the goal is to make life easier for the ops team
17:18:48 jlebon: think about the live installer environment if someone is doing hardware discovery
17:18:51 rtsisyk: that's what toolbox is for
17:19:14 do you mean the image which is used by OpenShift Console when I press the "Console" button?
17:19:56 oh lol, "depends on python" is already in there
17:20:01 OpenShift has some image used to enter into a TTY via web
17:20:11 but this image also doesn't have tools... :)
17:20:23 rtsisyk: I mean /usr/bin/toolbox
17:20:34 let me check
17:20:38 ahh.. bgilbert yeah we should probably mention toolbox in this list of questions
17:20:52 do we have a "debugging" container that already exists
17:20:59 * dustymabe vaguely remembers one called 'tools'
17:21:09 more generalized: any container image that contains your preferred toolset and can be run by podman in privileged mode
17:21:19 /usr/bin/toolbox
17:21:20 toolbox: missing command
17:21:20 These are some common commands:
17:21:20 create    Create a new toolbox container
17:21:20 enter     Enter an existing toolbox container
17:21:22 list      List all existing toolbox containers and images
17:21:28 Download registry.fedoraproject.org/f32/fedora-toolbox:32 (500MB)? [y/N]:
17:21:31 hmm
17:21:35 I have never seen this feature
17:21:41 yeah there is a tools container in fedora but it is not really maintained afaik
17:21:56 https://src.fedoraproject.org/container/tools
17:22:04 rtsisyk: see https://cloud.google.com/container-optimized-os/docs/how-to/toolbox to get an idea of how that flow works
17:22:06 'git grep toolbox' in fedora-coreos-docs returns no hits :-(
17:22:24 https://src.fedoraproject.org/container/tools/blob/master/f/Dockerfile
17:22:26 we should maybe flesh that out.. anything that can reasonably be done in there we should maybe point users to and add documentation for
17:22:40 this list looks like mine
17:22:49 everyone needs the same tools... :-)
17:23:16 but things that are needed for debugging the container runtime or network should go higher in the list than the "debugging container"
17:23:20 rtsisyk: in a container image, absolutely yes
17:23:34 right, now is this toolbox something you still need to pull after firing up an image? looks like that judging from the google page
17:23:34 I have to say that container/tools is almost the same list I have in my Ansible playbooks
17:23:51 filed https://github.com/coreos/fedora-coreos-docs/issues/187 for toolbox
17:24:05 thanks bgilbert.. fleshing out our toolbox story has been long overdue
17:24:08 OpenShift also has some `toolbox` image
17:24:09 this might be the kick we need
17:24:12 to clarify: there's toolbox and tools. the former is very much actively used, though IIRC there were still issues with running it as root on FCOS
17:24:28 but those might have been fixed now
17:24:35 jlebon: that might be fixed.. we need to investigate
17:24:37 in the openshift console you can open Compute -> Nodes -> nodename -> Terminal and it will launch a toolbox-like image
17:24:54 rtsisyk: right. yep. this is a similar concept
17:24:59 but that image has almost nothing
17:25:07 would it be acceptable to have the toolbox image already included in the fcos image?
17:25:10 even iostat isn't present
17:25:16 everyone needs iostat
17:25:23 ask any ops
17:25:24 top
17:25:25 iostat
17:25:29 aoei: probably not pre-seeded
17:25:30 are must-haves
17:25:38 ok dusty
17:25:55 this came up in RHCOS-land too. i think it'd be cool to have some way to embed container images in the VM/metal images
17:26:01 but i guess this is also for more general admin than just fixing network/crun in order to get the container working in the first place
17:26:02 doesn't help the live case though
17:26:20 silverblue does that with flatpak I think
17:26:21 rtsisyk: i think that's fair. there will probably be some we do want to include. we're just thinking generically right now
17:26:32 ok
17:26:36 proposal: let's file a tracker ticket to come up with rough criteria for adding packages
17:26:36 jlebon: nor the general case of any cloud image, no?
17:26:51 I don't think we'll ever reduce it to a mechanical set of rules
17:26:58 right, but guidelines i guess
17:27:12 and then consider proposals for individual packages.
17:27:23 bgilbert: yeah it's probably not a mechanical set of rules, but a set of "questions" is a good way to start the conversation about $tool
17:27:24 lucab: not those we upload ourselves, but user-uploaded ones sure
17:28:00 in general, rtsisyk, I don't think adding a large set of tools to the distro is something we're enthusiastic about doing. we absolutely do need better docs about how to use those tools in a container, though.
17:28:12 #action we'll file a tracker ticket to come up with rough criteria for adding packages
17:28:24 dusty have you actioned the thing about better toolbox docs?
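For readers unfamiliar with the /usr/bin/toolbox flow pasted above, the end-to-end sequence is roughly this (command names come from the pasted help output; the image name and the exact package set are illustrative, and whether `sudo` is needed inside depends on how the toolbox was entered):

```shell
# Create the debugging container once (this is the step that prompts
# to pull registry.fedoraproject.org/f32/fedora-toolbox:32):
toolbox create

# Enter it whenever tools are needed; the host filesystem stays clean:
toolbox enter

# Inside the toolbox, install whatever the base image lacks, e.g.:
dnf install -y strace tcpdump sysstat   # sysstat provides iostat
```

This mirrors the Container-Optimized OS toolbox flow linked in the discussion: the tools live in a mutable container, not in the immutable host image.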
17:28:28 @bgilbert, I agree
17:28:42 / fleshing them out
17:28:45 #link https://github.com/coreos/fedora-coreos-docs/issues/187
17:28:47 aoei: ^
17:28:57 ah right, that's that one
17:29:05 nice bgilbert
17:29:47 rtsisyk: though each tool will stand on its own merit so don't be afraid to propose any :)
17:29:52 right
17:29:52 and thanks for the list you did come up with
17:30:11 this will help kick us in the right direction to flesh out this part of our user experience
17:30:15 no problem, thanks
17:30:22 also trying to clear out any possible misunderstanding: including any package in FCOS does not directly result in it ending up in RHCOS/OpenShift
17:30:26 I will try to play with a priv container.
17:30:41 I use OKD
17:30:46 lucab: correct. It will end up in OKD (since it's based on FCOS)
17:30:59 but not necessarily RHCOS/OCP
17:31:10 rtsisyk++
17:31:10 dustymabe: Karma for rtsisyk changed to 1 (for the current release cycle): https://badges.fedoraproject.org/tags/cookie/any
17:31:17 wow
17:31:32 ok I did poor time management today
17:31:36 #topic open floor
17:31:42 indeed, this is good to discuss. thanks rtsisyk!
17:31:48 thanks, guys!
17:31:56 a few other things I wanted to get to but we'll catch them next week
17:32:06 anyone with anything for open floor?
17:32:37 for the f33 rebase I'm going to start opening PRs soon to get some of the work to land in testing-devel that will enable f33
17:32:51 we won't be moving over testing-devel, just prepping it and conditionalizing some logic
17:33:09 we will flip next-devel when ready. hopefully thursday or friday
17:33:27 next week's stable release will require PXE booting with the rootfs image
17:33:33 if anyone hasn't switched over yet, now is the time :-)
17:33:50 #info next week's stable release will require PXE booting with the rootfs image; if anyone hasn't switched over yet, now is the time :-)
17:34:07 any details about how to switch to the root image?
17:34:17 I'm still on 20200628 or something like that
17:34:19 yeah, one sec
17:34:58 #link https://docs.fedoraproject.org/en-US/fedora-coreos/bare-metal/#_pxe_rootfs_image
17:35:02 I'm trying to work out whether or not this matters for my VM - do providers simulate PXE boot for VMs (I doubt this) or is this only an issue for FCOS run on bare metal?
17:35:04 #link https://docs.fedoraproject.org/en-US/fedora-coreos/bare-metal/#_pxe_rootfs_image
17:35:21 #undo
17:35:21 Removing item from minutes:
17:35:29 providers use cloudinit
17:35:32 aoei: what provider do you use?
17:35:35 rtsisyk: nothing uses cloudinit :-)
17:35:35 GCP dusty
17:35:42 aoei: you're good then
17:35:44 aoei: more specifically live environments
17:35:49 okay, cool
17:35:59 aoei: mostly bare metal, but some people PXE boot the live image in e.g. VMware
17:36:05 you'd know you were doing it though
17:36:06 ahh okay
17:36:15 yeah i figured but i wanted to just check
17:36:17 thanks
17:36:21 rtsisyk: for FCOS we use the user data mechanism to pass an Ignition config to Ignition
17:36:25 we don't happen to use cloud-init
17:36:28 I will try to update some of my OKD clusters to a new rootfs-based image in the next couple weeks.
17:36:52 ok i'll close out the meeting soon
17:37:03 dustymabe: ok, I was misled by my OpenStack background.
17:37:12 for first boot I use the gcloud cli stuff and some containers for generating the Ignition file to fire up my instance
17:37:29 updates automatically reboot my VM, I actually missed this in the docs until I saw it happen for real lol
17:37:35 thanks for running the meeting dustymabe
17:37:38 bgilbert: slightly off-topic, though i feel like we should blog about the rootfs image and osmet for fun. it's pretty cool stuff
17:37:38 #endmeeting
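For anyone following up on the rootfs switchover discussed above: per the linked bare-metal docs, the live PXE setup gains a rootfs image, which can either be appended as a second initrd or fetched at boot via the `coreos.live.rootfs_url=` kernel argument. A sketch of a PXELINUX entry using the fetch-at-boot form (the filenames, version string, and server address below are placeholders):

```text
DEFAULT fcos
LABEL fcos
    KERNEL fedora-coreos-32.20200923.3.0-live-kernel-x86_64
    APPEND initrd=fedora-coreos-32.20200923.3.0-live-initramfs.x86_64.img coreos.live.rootfs_url=http://192.168.1.1/fedora-coreos-32.20200923.3.0-live-rootfs.x86_64.img ignition.firstboot ignition.platform.id=metal
```

Setups that PXE-boot older live images without a rootfs reference are the ones that will break with the stable release mentioned in the #info above.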