2025-02-13 17:30:04 <@tflink:fedora.im> !startmeeting fedora-ai-ml-sig
2025-02-13 17:30:05 <@meetbot:fedora.im> Meeting started at 2025-02-13 17:30:04 UTC
2025-02-13 17:30:05 <@meetbot:fedora.im> The Meeting name is 'fedora-ai-ml-sig'
2025-02-13 17:30:08 <@tflink:fedora.im> !topic welcome
2025-02-13 17:30:09 <@tflink:fedora.im> !hello
2025-02-13 17:30:10 <@zodbot:fedora.im> Tim Flink (tflink)
2025-02-13 17:30:21 <@man2dev:fedora.im> !hi
2025-02-13 17:30:22 <@zodbot:fedora.im> Mohammadreza Hendiani (man2dev)
2025-02-13 17:30:23 <@tflink:fedora.im> who all's here for the ai-ml sig meeting?
2025-02-13 17:33:28 <@tflink:fedora.im> let's get started and hopefully a few more folks will find their way here in the meantime
2025-02-13 17:33:41 <@tflink:fedora.im> !topic previous meeting follow-up
2025-02-13 17:33:50 <@tflink:fedora.im> !link https://discussion.fedoraproject.org/t/figuring-out-npu-support-in-fedora/143717
2025-02-13 17:34:22 <@ludiusvox:fedora.im> Hey tflink, how are you?
2025-02-13 17:34:43 <@tflink:fedora.im> !info last meeting we talked about NPU support in Fedora. There has been one update to the discourse thread about what is needed for Intel's NPUs but more discussion will be needed
2025-02-13 17:34:55 <@mystro256:fedora.im> !hi
2025-02-13 17:34:57 <@tflink:fedora.im> anyone have anything to add on the NPU discussion from last week?
2025-02-13 17:35:00 <@zodbot:fedora.im> None (mystro256)
2025-02-13 17:35:00 <@ludiusvox:fedora.im> Tom Rix: I showed up, I know my attendance has been spotty
2025-02-13 17:35:28 <@tflink:fedora.im> I assume not for the moment or it'll come up in today's topics :)
2025-02-13 17:35:29 <@trix:fedora.im> !hi
2025-02-13 17:35:31 <@zodbot:fedora.im> Tom Rix (trix)
2025-02-13 17:36:03 <@tflink:fedora.im> moving on to today's topics
2025-02-13 17:36:10 <@tflink:fedora.im> !topic Laptop testing for F42
2025-02-13 17:36:20 <@tflink:fedora.im> Tom Rix: this one is yours, take it away
2025-02-13 17:36:22 <@ludiusvox:fedora.im> I wasn't here last time and I am not going to ask to be caught up, so it's fine.
2025-02-13 17:36:22 <@ludiusvox:fedora.im> The last meeting I went to, we were planning on training Granite for use in documentation assistance, and I demonstrated RAG-bot LLM code with LangChain.
2025-02-13 17:36:22 <@ludiusvox:fedora.im> Can I ask about the status of the Granite LLM?
2025-02-13 17:36:54 <@tflink:fedora.im> Aaron Linder: would it be ok to leave that for open floor?
2025-02-13 17:37:01 <@ludiusvox:fedora.im> Yes sir
2025-02-13 17:37:48 <@trix:fedora.im> I enabled a lot of AMD laptops I don't have. anyone want to poke at AI in the F42 testing now?
2025-02-13 17:37:53 <@xanderlent:fedora.im> !hi
2025-02-13 17:37:54 <@zodbot:fedora.im> Alexander Lent (xanderlent) - he / him / his
2025-02-13 17:38:05 <@tflink:fedora.im> which ones were enabled?
2025-02-13 17:38:28 <@trix:fedora.im> 680M, 780M, strix*
2025-02-13 17:38:50 <@trix:fedora.im> folks likely have 680M or 780M
2025-02-13 17:38:51 <@tflink:fedora.im> !info several AMD laptops have been enabled for the AI stack (680M, 780M, strix*) and testing would be appreciated if folks have the hardware and time
2025-02-13 17:39:20 <@tflink:fedora.im> I have a 780M and will try to do some testing over the weekend
2025-02-13 17:39:26 <@trix:fedora.im> thanks!
2025-02-13 17:39:37 <@tflink:fedora.im> are you looking for anything in particular? just some basic poke-at-pytorch?
2025-02-13 17:39:54 <@trix:fedora.im> pytorch would be good.
2025-02-13 17:40:09 <@trix:fedora.im> i only run the unit tests, i don't really do much else.
2025-02-13 17:41:24 <@tflink:fedora.im> is there anything else that folks wanted to add on this?
2025-02-13 17:41:51 <@ludiusvox:fedora.im> I appreciate the ease of use and performance of my AMD 6800 on Fedora Workstation
2025-02-13 17:42:16 <@trix:fedora.im> that also needs testing, i don't have a 6800.
2025-02-13 17:42:29 <@tflink:fedora.im> yeah, we have almost no representation for gfx10
2025-02-13 17:43:03 <@ludiusvox:fedora.im> I would need assistance running the rest because I don't have F42 rawhide installed to be able to do experimental testing
2025-02-13 17:43:24 <@trix:fedora.im> i have a container thingie..
2025-02-13 17:43:36 <@ludiusvox:fedora.im> Okay, we can figure it out later
2025-02-13 17:44:00 <@tflink:fedora.im> it might be interesting to see containerized test results. for better or worse the kernels tend to be similar if not the same in fedora releases
2025-02-13 17:44:09 <@trix:fedora.im> https://github.com/trixirt/rocm-distro-containers/blob/main/fedora/f42/pytorch/Dockerfile
2025-02-13 17:44:28 <@trix:fedora.im> lets talk about the container later, i just cooked it up yesterday for someone else.
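[Editor's note: for anyone picking up the "basic poke-at-pytorch" testing above, the following is a minimal smoke-test sketch, not an official SIG test plan. It assumes Fedora's python3-torch build; on ROCm hardware, AMD GPUs are exposed through the `torch.cuda` API. The script falls back to CPU (or skips entirely) when torch or an accelerator is unavailable.]

```python
# Minimal PyTorch smoke test for the AMD laptop enablement discussed above.
# ROCm devices appear through torch.cuda, so "cuda" covers AMD GPUs here.

def pick_device(gpu_available: bool) -> str:
    """Return the torch device string to use ('cuda' covers ROCm too)."""
    return "cuda" if gpu_available else "cpu"

def smoke_test() -> str:
    try:
        import torch  # Fedora package: python3-torch
    except ImportError:
        return "torch not installed; nothing to test"
    device = pick_device(torch.cuda.is_available())
    x = torch.rand(512, 512, device=device)
    y = x @ x.T  # a matmul exercises the GPU kernels (or CPU BLAS)
    return f"device={device} mean={y.mean().item():.4f}"

if __name__ == "__main__":
    print(smoke_test())
```

If this runs on the GPU without hangs or kernel errors, the pytorch unit tests Tom mentioned are the next step.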
2025-02-13 17:44:29 <@ludiusvox:fedora.im> I'll put this in Google Keep
2025-02-13 17:44:33 <@tflink:fedora.im> but yeah, we can talk about containerized testing outside of the meeting
2025-02-13 17:44:41 <@man2dev:fedora.im> Mock v6 supports container builds now
2025-02-13 17:45:04 <@tflink:fedora.im> anything else on this topic or shall we move on?
2025-02-13 17:45:11 <@trix:fedora.im> move on.
2025-02-13 17:45:33 <@tflink:fedora.im> !topic non-x86 handling
2025-02-13 17:45:50 <@trix:fedora.im> pytorch is also built for aarch64.
2025-02-13 17:45:52 <@tflink:fedora.im> the perpetual topic that ends in "nobody has HW" :)
2025-02-13 17:46:01 <@trix:fedora.im> i don't have that hw.
2025-02-13 17:46:16 <@tflink:fedora.im> does rocm even build for aarch64?
2025-02-13 17:46:22 <@ludiusvox:fedora.im> Okay I made a sticky note for later
2025-02-13 17:46:32 <@tflink:fedora.im> or is that cpu only?
2025-02-13 17:46:35 <@trix:fedora.im> not really, that is another reason it has no love from me.
2025-02-13 17:46:41 <@trix:fedora.im> yes, cpu only.
2025-02-13 17:47:30 <@mystro256:fedora.im> rocm is spotty on non-x86
2025-02-13 17:47:32 <@tflink:fedora.im> !info testing is needed for the cpu-only build of pytorch on aarch64
2025-02-13 17:47:39 <@mystro256:fedora.im> right now it doesn't build for a big chunk
2025-02-13 17:47:43 <@trix:fedora.im> this is also a warning: if it seriously breaks on aarch64, i will not fix it.
2025-02-13 17:48:13 <@mystro256:fedora.im> basically if it compiles, it's use at your own risk
2025-02-13 17:48:35 <@mystro256:fedora.im> I mean you could run pytorch on cpu, no?
2025-02-13 17:48:40 <@mystro256:fedora.im> not sure if it makes sense though
2025-02-13 17:48:56 <@ludiusvox:fedora.im> Okay
2025-02-13 17:49:13 <@tflink:fedora.im> it works, it just tends to take forever on anything non-trivial :)
2025-02-13 17:49:40 <@tflink:fedora.im> the only accelerator I know of right now that works with aarch64 is some nvidia stuff
2025-02-13 17:50:09 <@trix:fedora.im> anyone have time & interest?
2025-02-13 17:50:29 <@tflink:fedora.im> !info if you do end up testing pytorch with aarch64 HW, please report your findings in #ai-ml:fedoraproject.org
2025-02-13 17:50:53 <@tflink:fedora.im> I don't have any aarch64 HW available for testing, unfortunately
2025-02-13 17:51:04 <@xanderlent:fedora.im> I know of some other folks working on aarch64 AI/ML accelerators (for example Mesa libTeflon or the Asahi ANE project) but nothing w.r.t. pytorch specifically.
2025-02-13 17:51:34 <@man2dev:fedora.im> I'm looking into how to add aarch64 without breaking x86
2025-02-13 17:51:44 <@trix:fedora.im> any way.. i think we can move on.
2025-02-13 17:51:46 <@tflink:fedora.im> yeah, I think there are more accelerators coming but nothing else is working and available right now that I know of
2025-02-13 17:51:48 <@tflink:fedora.im> yep
2025-02-13 17:51:56 <@ludiusvox:fedora.im> I have a MediaTek Chromebook, but I don't get a distro choice in Android as to what container I install, and I heard there will be changes with Android and ChromeOS in general
2025-02-13 17:51:58 <@man2dev:fedora.im> But I haven't found anything conclusive to test out
2025-02-13 17:52:13 <@tflink:fedora.im> !topic Granite and documentation assistance
2025-02-13 17:52:31 <@tflink:fedora.im> Aaron Linder: I wasn't sure what to call this but it's your topic from earlier
2025-02-13 17:52:55 <@ludiusvox:fedora.im> Thanks. As I was saying earlier, what's the status of Granite? I have seen some users on social media worried about telemetry and privacy
2025-02-13 17:53:24 <@tflink:fedora.im> I'm not sure there has been any progress in Fedora but to be honest, I haven't been paying a ton of attention
2025-02-13 17:53:40 <@ludiusvox:fedora.im> I think that IBM has been working on it for Red Hat
2025-02-13 17:53:48 <@tflink:fedora.im> in Fedora?
2025-02-13 17:54:16 <@ludiusvox:fedora.im> I am not able to locate the article (actually it was a YouTube podcast) but I read the general headline about it and posted my 2 cents
2025-02-13 17:54:31 <@tflink:fedora.im> I know that there was a new version of the granite models released in the last week or so. I think they're available on HF
2025-02-13 17:54:38 <@ludiusvox:fedora.im> I can go look on YouTube rq with a query
2025-02-13 17:54:51 <@ludiusvox:fedora.im> Let me go look, brb
2025-02-13 17:57:07 <@xanderlent:fedora.im> https://blogs.gnome.org/uraeus/2025/02/03/looking-ahead-at-2025-and-fedora-workstation-and-jobs-on-offer/
2025-02-13 17:57:07 <@xanderlent:fedora.im> There is apparently work going on at IBM/RH on Granite and AI in general in Fedora, including accelerated workloads, according to this blog post (first heading, "Artificial Intelligence")
2025-02-13 17:57:23 <@ludiusvox:fedora.im> https://www.fudzilla.com/news/60487-red-hat-plans-to-integrate-ai-with-ibm-s-granite-engine#:~:text=The%20Red%20Hat%20team%20has%20announced%20plans%20to,environments%20%28IDEs%29%20and%20create%20an%20AI-powered%20Code%20Assistant.
2025-02-13 17:57:34 <@ludiusvox:fedora.im> And it's something with Fedora also
2025-02-13 17:57:56 <@ludiusvox:fedora.im> This is not the original article but I have heard some buzz about this
2025-02-13 17:58:15 <@tflink:fedora.im> honestly, I'm not sure what exactly is planned. I think that's all workstation stuff
2025-02-13 17:58:59 <@tflink:fedora.im> it'll be interesting to see what they have planned but I'll bet it centers around ramalama and maybe toolbx
2025-02-13 17:59:11 <@ludiusvox:fedora.im> But nobody in here knew about this and I don't know who to ask
2025-02-13 17:59:14 <@tflink:fedora.im> I don't recall if the ramalama review passed or not
2025-02-13 18:00:02 <@tflink:fedora.im> the fedora workstation room would be a good place to start, I think
2025-02-13 18:00:08 <@man2dev:fedora.im> RamaLama is in
2025-02-13 18:00:30 <@tflink:fedora.im> cool, thanks for the update
2025-02-13 18:01:00 <@xanderlent:fedora.im> Seems exactly that:
2025-02-13 18:01:00 <@xanderlent:fedora.im> "We been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points."
2025-02-13 18:01:27 <@tflink:fedora.im> which would run locally, but then HW enablement is an issue
2025-02-13 18:02:09 <@ludiusvox:fedora.im> Which I am okay with. If the person who knew about this were here, I would only request model variability: if running on an ollama-compatible system, have model quantization available for lower-performance machines.
2025-02-13 18:02:09 <@ludiusvox:fedora.im> But I have a feeling that this will be a telemetry-based system with an API. I guess we would have to ask the system architect who I met at Flock, he probably knows something about this
2025-02-13 18:02:28 <@tflink:fedora.im> ramalama is local only AFAIK
2025-02-13 18:02:42 <@tflink:fedora.im> it's roughly equivalent to ollama and vllm in terms of functionality
2025-02-13 18:03:27 <@tflink:fedora.im> I'd be a bit surprised if the AI stuff was anything other than local, just due to cost
2025-02-13 18:03:52 <@man2dev:fedora.im> It's just llama-cpp and vllm with multiple backends based on how you set up your container
2025-02-13 18:03:52 <@ludiusvox:fedora.im> Okay, I will have to go test ramalama. I have gotten Ollama to work with a custom install.sh to make it compatible with AMD GPUs; when I get home I can post the custom install.sh somewhere
2025-02-13 18:04:34 <@tflink:fedora.im> ah, ok. I haven't gotten around to actually looking at it yet, I've just been hearing about it :)
2025-02-13 18:04:59 <@tflink:fedora.im> was rocm enabled for ramalama?
2025-02-13 18:05:26 <@man2dev:fedora.im> I don't remember
2025-02-13 18:06:31 <@ludiusvox:fedora.im> I am not sure, I don't know enough about ramalama. But about the increased capabilities I have had with Python packages for ollama: I found a half-finished repository and am talking to the author, and I got ollama working on F41 with ROCm. It's a package that applies for jobs for me, a robot, it's funny. It's just WIP (work in progress)
2025-02-13 18:06:55 <@man2dev:fedora.im> I know vllm had a cuda image
2025-02-13 18:07:11 <@ludiusvox:fedora.im> Yeah, I think some custom work needs to be done for ROCm
2025-02-13 18:07:24 <@tflink:fedora.im> vllm can be built with rocm support but I think there are patches required from amd
2025-02-13 18:08:00 <@ludiusvox:fedora.im> So I think an option-based modification to the ramalama install.sh would need to be done
2025-02-13 18:08:19 <@ludiusvox:fedora.im> Or it is an rpm package
2025-02-13 18:08:46 <@tflink:fedora.im> it sounds like ramalama was approved as a fedora package
2025-02-13 18:09:22 <@ludiusvox:fedora.im> So an rpm that auto-detects system hardware. I have no idea how to inspect packages, I can just do install shell scripts
2025-02-13 18:10:27 <@man2dev:fedora.im> It mostly seems to use podman functionality
2025-02-13 18:10:30 <@tflink:fedora.im> Aaron Linder: have your questions been somewhat answered? at least to the point where you know where to ask for more info?
2025-02-13 18:12:13 <@ludiusvox:fedora.im> Yes sir, thank you
2025-02-13 18:12:20 <@ludiusvox:fedora.im> I will get on ramalama
2025-02-13 18:13:03 <@tflink:fedora.im> cool, then we can move on to
2025-02-13 18:13:06 <@tflink:fedora.im> !topic open floor
2025-02-13 18:13:13 <@tflink:fedora.im> any other things that folks want to bring up?
2025-02-13 18:14:16 <@ludiusvox:fedora.im> I somehow got MediaPipe working on a local F41 machine. I think it's TensorFlow Lite; I have no idea how it works because I know we haven't successfully compiled tensorflow yet
2025-02-13 18:14:39 <@ludiusvox:fedora.im> I got it working outside of containers
2025-02-13 18:14:41 <@tflink:fedora.im> yeah, tensorflow is a daunting task
2025-02-13 18:15:04 <@ludiusvox:fedora.im> I got a container working with NVIDIA, I haven't tried ROCm
2025-02-13 18:15:57 <@ludiusvox:fedora.im> MediaPipe I was using for biomedical data collection from MP4 files, distinguishing between faces and hands, and it works somehow
2025-02-13 18:16:34 <@ludiusvox:fedora.im> Maybe it doesn't need a GPU, not sure
2025-02-13 18:17:41 <@ludiusvox:fedora.im> Let me show the mediapipe library
2025-02-13 18:19:26 <@ludiusvox:fedora.im> https://github.com/google-ai-edge/mediapipe
2025-02-13 18:20:54 <@xanderlent:fedora.im> A quick status update on my work on the Intel NPU stack:
2025-02-13 18:20:54 <@xanderlent:fedora.im> - The core driver/firmware stuff is getting closer to ready for upstreaming. (Even just getting the firmware into Fedora would be useful because then you could run Intel's Ubuntu user-space bits in a container.)
2025-02-13 18:20:54 <@xanderlent:fedora.im> - Except for the big part of the driver, the compiler-in-driver; I'm still wrestling that ball of code into an RPM. Luckily it is semi-separable from the rest.
2025-02-13 18:20:54 <@xanderlent:fedora.im> - I've also been working in parallel on getting more of the high-level stuff packaged; this is things like the Audacity and GIMP AI plugins that use the NPU through OpenVINO. Still very much WIP, though.
2025-02-13 18:21:49 <@tflink:fedora.im> sounds like progress, though. and that's not a small task
2025-02-13 18:21:59 <@zodbot:fedora.im> tflink gave a cookie to xanderlent.
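[Editor's note: the face-vs-hand distinction ludiusvox described above can be sketched with MediaPipe's legacy solutions API. This is an illustrative sketch, not his actual script; it assumes `pip install mediapipe opencv-python` (neither is packaged in Fedora, per the tensorflow discussion) and skips gracefully when they are missing. The landmark counts are MediaPipe's documented values: 21 per hand, 468 for the face mesh.]

```python
# Sketch: label MediaPipe landmark sets from video frames as hand vs face.

def label_from_landmark_count(n: int) -> str:
    """Rough label for a MediaPipe landmark set by its size."""
    if n == 21:       # MediaPipe Hands emits 21 landmarks per hand
        return "hand"
    if n == 468:      # MediaPipe Face Mesh emits 468 landmarks
        return "face"
    return "unknown"

def process_video(path: str) -> None:
    try:
        import cv2
        import mediapipe as mp
    except ImportError:
        print("mediapipe/opencv not installed; skipping")
        return
    hands = mp.solutions.hands.Hands(static_image_mode=False)
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV decodes to BGR
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for lms in result.multi_hand_landmarks or []:
            print(label_from_landmark_count(len(lms.landmark)))
    cap.release()
```

Whether the GPU is used depends on the mediapipe build; the default pip wheels run the TensorFlow Lite graphs on CPU, which may explain why it "works somehow" without GPU enablement.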
They now have 1 cookie, 1 of which was obtained in the Fedora 41 release cycle
2025-02-13 18:22:56 <@ludiusvox:fedora.im> Okay, I'll look at NPU manufacturers. I no longer own an NPU
2025-02-13 18:23:15 <@xanderlent:fedora.im> Thanks. If I had AMD hardware I'd also be looking at the XDNA stack, but I'm currently focused on what I can test. 🙂
2025-02-13 18:23:42 <@tflink:fedora.im> yeah, that's a limitation we all have - hardware is expensive and constantly changing. money and time are finite
2025-02-13 18:23:53 <@tflink:fedora.im> anyhow, we're pretty much out of time for today
2025-02-13 18:24:14 <@tflink:fedora.im> if there are no other topics, I'll close out today's meeting and we can move conversation to #ai-ml:fedoraproject.org
2025-02-13 18:24:49 <@tflink:fedora.im> thanks for coming, everyone
2025-02-13 18:25:04 <@tflink:fedora.im> !endmeeting