--- Log opened Fri Feb 09 00:00:53 2024
01:43 < hprmbridge> kanzure> "Neural abstract Reasoner" https://arxiv.org/abs/2011.09860
01:52 < hprmbridge> kanzure> https://openreview.net/forum?id=6Um8P8Fvyhl
01:53 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
03:52 -!- AugustaAva [~x@yoke.ch0wn.org] has quit [Ping timeout: 264 seconds]
03:54 < kanzure> instead of curing the FDA, what you need is more of the sickness
04:20 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
06:33 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 255 seconds]
06:36 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
07:45 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
07:45 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
08:15 -!- gptpaste [~x@yoke.ch0wn.org] has quit [Remote host closed the connection]
08:16 -!- EmmyNoether [~EmmyNoeth@yoke.ch0wn.org] has quit [Remote host closed the connection]
08:16 -!- EmmyNoether [~EmmyNoeth@yoke.ch0wn.org] has joined #hplusroadmap
08:31 -!- millefy [~Millefeui@91-160-78-132.subs.proxad.net] has joined #hplusroadmap
09:05 < kanzure> sam altman seeking $7 trillion for compute and energy for AI projects
09:05 < kanzure> millefy: welcome.
09:05 < hprmbridge> soul_syrup> typical
09:05 < hprmbridge> soul_syrup> lol
10:17 < hprmbridge> kanzure> "Blood biomarker profiles and exceptional longevity: comparison of centenarians and non-centenarians in a 35-year follow-up of the Swedish AMORIS cohort" https://link.springer.com/article/10.1007/s11357-023-00936-w
10:21 < hprmbridge> kanzure> "A genetic circuit on a single DNA molecule as an autonomous dissipative nanodevice" https://www.nature.com/articles/s41467-024-45186-2
10:35 < hprmbridge> kanzure> "Identification of pharmacological inducers of a reversible hypometabolic state for whole organ preservation" https://elifesciences.org/reviewed-preprints/93796
11:58 < hprmbridge> kanzure> https://www.asimov.press/p/synbio-guide
13:01 < kanzure> bitcoin-dev mailing list migration announcement and info: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2024-February/022327.html https://gnusha.org/pi/bitcoindev/CABaSBaxDjj6ySBx4v+rmpfrw4pE9b=JZJPzPQj_ZUiBg1HGFyA@mail.gmail.com/ https://twitter.com/kanzure/status/1756055826723217659 -> https://groups.google.com/g/bitcoindev/
14:50 < hprmbridge> kanzure> https://cdn.discordapp.com/attachments/1064664282450628710/1205646832026394684/GF6sANqXEAAATf9.png?ex=65d9211c&is=65c6ac1c&hm=28b1bfaa18b1925dd1f5e7a93e991e616408c127aa4f51a14e06523734579d12&
14:53 < kanzure> https://teachprivacy.com/why-privacy-matters-an-interview-with-neil-richards/
15:21 -!- alethkit [23bd17ddc6@sourcehut/user/alethkit] has quit [Ping timeout: 256 seconds]
15:23 -!- alethkit [23bd17ddc6@sourcehut/user/alethkit] has joined #hplusroadmap
15:38 -!- alethkit [23bd17ddc6@sourcehut/user/alethkit] has quit [Ping timeout: 255 seconds]
15:40 -!- Goober_patrol66 [~Gooberpat@2603-8080-4540-7cfb-0000-0000-0000-113a.res6.spectrum.com] has quit [Quit: Leaving]
15:45 < fenn> i want to joke about it, but are there really people with $7 trillion to spend?
15:53 < L29Ah> dollars are cheap, hardware manufacturing capacity would be a much more pressing concern
15:55 < hprmbridge> kanzure> this is your competition https://twitter.com/rob_carlson/status/1756091625271599289
16:01 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
16:14 -!- alethkit [23bd17ddc6@sourcehut/user/alethkit] has joined #hplusroadmap
16:16 < fenn> i thought the tweet was going to be about bio-engineered organoids for AI
16:16 < fenn> we could train organoids and then distill their output logits (or whatever it is) into a digital neural network
16:17 < fenn> even though there's no direct readout of the network weights
16:17 < hprmbridge> kanzure> is the argument that biological learning is faster and cheaper, but not for general execution after training?
16:18 < fenn> also seems like a good starter product for scaling up DNA barcoding for connectomics
16:18 < fenn> biological stuff dies and then you have to re-train it. also you can't work out the bugs on one organoid and transfer that knowledge to another. it doesn't scale
16:19 < kanzure> you can use machine learning and genome synthesis (or genome editing) to grind towards a genome that produces a biological neural network that once trained has more legible connection weights
16:19 < fenn> i also want a pony
16:19 < kanzure> such as by labeling the connection weights
16:19 < kanzure> ok then you are talking about something else..?
16:19 < fenn> you mean like brainbow stuff?
16:20 < fenn> literally colorful synapse weights?
16:20 < kanzure> whatever; we don't even know if receptor density measurements are what we need to replicate biological execution in hardware- but with a tight feedback loop you can just have the computers figure it out by modifying the biology until there's a signal that they are monitoring and using for feedback
16:20 < fenn> how do you know you have successfully emulated the bio network?
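fenn's "distillation" idea above — query a black box only through its outputs, then train a digital student network to reproduce its output probability distributions — can be sketched in a few lines. This is an illustrative toy under assumed conditions, not any real organoid interface: the "teacher" here is a hypothetical lookup table standing in for the flesh-on-electrodes, and the student is a bare softmax-over-logits model trained against the teacher's soft targets.

```python
import math

# Hypothetical "teacher": a black box (stand-in for an organoid on an
# electrode array) that, given a context id, returns a probability
# distribution over N output symbols. We can only query it; there is no
# access to internal weights.
N = 3
teacher_table = {0: [0.7, 0.2, 0.1], 1: [0.1, 0.1, 0.8]}
def teacher(ctx):
    return teacher_table[ctx]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Student: one logit vector per context, trained to match the teacher's
# distributions. The gradient of cross-entropy against soft targets,
# w.r.t. the logits, is simply (p_student - p_teacher).
logits = {c: [0.0] * N for c in teacher_table}
lr = 0.5
for step in range(500):
    for ctx in teacher_table:
        p_t = teacher(ctx)              # repeated querying of the black box
        p_s = softmax(logits[ctx])
        for i in range(N):              # gradient step on soft cross-entropy
            logits[ctx][i] -= lr * (p_s[i] - p_t[i])

# after training, the student reproduces the teacher's distributions
for ctx in teacher_table:
    assert all(abs(a - b) < 1e-3 for a, b in zip(softmax(logits[ctx]), teacher(ctx)))
```

This is the same mechanism as model distillation/extraction between language models: only the output distributions cross the boundary, never the weights.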
16:21 < kanzure> the simulated or emulated version matches the behavior of the physical one (physical reservoir computing stuff goes here)
16:21 < kanzure> just faster in silico
16:21 < fenn> not necessarily faster
16:22 < kanzure> you've lost me. so not for performance but for longevity you want weight transference from in vivo to in silico?
16:22 < kanzure> s/longevity/replicability
16:23 < fenn> silicon is more parallelizable i guess; for large networks on current GPU architectures with memory separate from compute, the cost of moving bits around swamps everything else and even a 20Hz brain is faster
16:24 < fenn> it costs about the same to run 1 mind as 16 minds, and you get diminishing returns around a batch size of 64
16:24 < kanzure> what do you mean "even though" there's no direct readout of network weights? how did you get to "distill their output logits" if there's no weight readout?
16:25 < fenn> a blob of flesh on an electrode array with N electrodes has N "output neurons" which we can compare to the input and output layers of a neural network
16:26 < fenn> if we train the flesh to output probabilities of the next word, say, that information can be used to efficiently train a neural network to think the same way
16:26 < fenn> we aren't looking at it under a microscope at all
16:27 < fenn> this technique is being used already for transferring knowledge between language models and image generation models
16:27 < fenn> er, between models of the same type, including such types as language and image
16:27 < kanzure> idea is biological tissue would learn faster/more cheaply? because training is expensive compute/energy when in silico?
16:27 < fenn> right
16:28 < kanzure> and transference is through repeated querying?
i haven't been following the LLM stealing prompt stuff
16:28 < fenn> also maybe we can replicate the thinking pattern by training another blob of flesh to reproduce the probability distributions
16:29 < kanzure> (the thing where you query an LLM a bunch of times and reconstruct its weights in another model)
16:29 < fenn> going directly from flesh to flesh might work, i have to think about it
16:29 < fenn> yeah that is what i am talking about with "distillation"
16:29 < kanzure> what is the term of art for that?
16:29 < fenn> it's called that because usually you go from a large model to a small model
16:30 < fenn> "model extraction"
16:33 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
16:55 < L29Ah> https://en.wikipedia.org/wiki/James_Magnussen#2024
17:02 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 260 seconds]
17:04 -!- BEXCHA is now known as autopilot
17:04 < fenn> "In February 2024, Magnussen announced that he would be coming out of retirement to take part in the Enhanced Games. Magnussen will be paid $1 million if he breaks the men’s 50m freestyle record"
17:10 < fenn> is there some kind of mutual snitching attractor that socially excludes athletes who talk about enhancement?
17:11 < fenn> is it just wikipedia's editor bias? surely there must be someone willing to speak in favor of the enhanced games
17:14 < fenn> growth hormone mutant? you're the lucky winner! taking growth hormones? that's unfair and immoral~
17:17 < fenn> either way you have to be absurdly dedicated to actually win
17:17 < fenn> the way i see it, freedom of substrate actually levels the playing field. no more genetic lottery
17:18 < fenn> i prefer unlimited engineering competitions like pikes peak vs over-regulated f1 or god forbid nascar
17:19 < fenn> otherwise what are we even doing?
starting out with the conclusion, then performatively executing the foregone conclusion
17:23 < fenn> in 1968 a creative engineering team swapped out their old piston engine for a gas turbine helicopter engine. it was so good it immediately won all the races and was banned. we're now stuck with the piston engine technology in this supposedly elite but actually second rate league
17:25 < fenn> in 1978 another creative team used a fan to generate extra downforce, keeping the car stuck to the track even at low speeds. the car immediately won all the races and was banned. we're now stuck with passive aerodynamics
17:26 < L29Ah> same thing with cycling
17:26 < L29Ah> there are devices that reach 130kph on a horizontal track by human power alone
17:28 < L29Ah> one could then argue for allowing/banning energy capacitors tho
17:29 < fenn> ok but surely there should at least be one sports league that allows all the silly things
17:30 < fenn> like omg what if they allowed turbochargers on race cars, then all cars would have to use turbochargers!
17:30 < fenn> only naturally aspirated engines are safe
17:30 < fenn> the turbine might explode and kill someone with high speed shrapnel
17:30 < fenn> see how stupid that sounds
17:30 < fenn> but the line is completely arbitrary
17:32 < fenn> WADA's decisions on which substances to allow are not based on risk, and they don't even pretend to quantify the risk of "natural" athletics
17:33 < fenn> the stuff modern athletes do is so far removed from even 50 years ago, that it wouldn't be a fair competition if you could time travel
17:33 < fenn> it's not "just" that there's a larger pool to select from
17:34 < fenn> science has progressed, but some science has been banned
17:44 < kanzure> fenn: the silicon adherents would argue that in silico training is faster than biological neural network training and therefore your in vitro tissue or organoid culture scheme would not be advantageous
17:46 < kanzure> makes sense that pro-transhumanism athletes would be the ones that have already retired from the regular athletic careers and have no concerns about losing out on other competitions by participating in pro-steroid sports
17:49 < fenn> participating in enhanced games would be career suicide for any current pro athlete
17:49 < fenn> this is literally a rebellion against the IOC
17:49 < kanzure> does WADA have a chokehold on pro skating?
17:50 < fenn> no
17:50 < kanzure> but WADA infiltrated chess
17:50 < kanzure> can't explain that
17:50 < fenn> when did that happen
17:51 < fenn> skaters are much more distrustful of authority and drug testing than most cultures
17:51 < kanzure> https://www.fide.com/FIDE/handbook/WADA%20Anti%20Doping.pdf
17:51 < kanzure> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5978818
17:51 < kanzure> "According to the Chess WADA Anti-Doping Policy, the most relevant banned substances for chess are amphetamine derivatives (Adderall, Ritalin), ephedrine and methylephedrine, pseudoephedrine, and Modafinil.
Notably, caffeine and codeine are not prohibited, but figure in the Monitoring Program."
17:52 < kanzure> looooool
17:52 < kanzure> machines are crushing our best chess players and we're busy trying to cap our best chess players by monitoring their caffeine intake
17:53 < kanzure> "look these coffee beans natively have dextroamphetamine okay?"
17:54 < fenn> i can't even find anything about doping in skateboarding
17:55 < kanzure> olympic skating is subject to WADA.. makes sense.
17:55 < kanzure> "With skateboarding's inclusion in the Olympics starting from Tokyo 2020, athletes competing in these events must comply with WADA's anti-doping rules. This means undergoing drug testing and adhering to the list of prohibited substances. Olympic skateboarding competitions are governed by the International Skateboarding Federation (ISF) or World Skate, which have adopted anti-doping policies in ...
17:55 < kanzure> ...line with WADA's standards."
17:56 < kanzure> "Professional skateboarding has historically been characterized by its countercultural ethos, which includes a relaxed attitude towards substance use."
17:56 < kanzure> guess it didn't totally escape WADA.
17:56 < fenn> yes but world skate is all kinds of different activities involving wheels or blades
17:57 < kanzure> "... organized by specific communities or organizations dedicated to extreme pogo or "Xpogo," and these organizations may not have strict anti-doping policies in place"
17:57 < fenn> also japanese skaters have very different culture
17:59 < kanzure> perhaps a privacy-focused enhancement olympics could work, where you keep everything private and anonymous
17:59 < fenn> lol yeah right
17:59 < fenn> homomorphically encrypted athletics competitions
17:59 < kanzure> "and here's the 100m guy fawkes dash"
18:00 < fenn> "DO YOU NOT HAVE A MOLE ON YOUR LEFT ANKLE?"
18:01 < hprmbridge> kanzure> https://cdn.discordapp.com/attachments/1064664282450628710/1205694945407533136/Ftm8phPWABUbz5h.png?ex=65d94deb&is=65c6d8eb&hm=eb226253b3f33e3e086dccc3391a070a61948a713c112b29ec774cc02e684702&
18:01 < fenn> i would be interested in whatever substances skaters have found improves dexterity, balance, coordination, timing, etc.
18:01 < fenn> it's mostly not about strength or endurance
18:02 < fenn> also there's some cognitive capability involving trajectories through state space, which i don't know how to name
18:02 < kanzure> hopefully it's not completely bruteforced muscle memory
18:03 < fenn> it's not
18:03 < fenn> practice is key of course, but some people just learn to skate faster than others
18:05 < kanzure> what argument is there that biological training would be faster? i think i saw something like amount of data exposure in common crawl (or whatever data sets they are using for 150b models?) is several orders of magnitude larger than human child exposure
18:05 < kanzure> but, human child exposure is massive just from retinal input... so...
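kanzure's comparison can be put into rough numbers. Every figure below is an assumption chosen for an order-of-magnitude sketch (a GPT-3-scale token count, a generous words-per-day estimate, a crude retinal bit rate), not a measurement:

```python
# All numbers are loose assumptions for an order-of-magnitude sketch.
llm_tokens = 300e9                      # GPT-3-scale pretraining corpus, ~300B tokens

words_per_day = 15_000                  # assumed words a toddler hears per day
child_words = words_per_day * 365 * 3   # text-like exposure by age 3 (~1.6e7)

# crude retinal input estimate: ~1e6 ganglion-cell axons per eye,
# an assumed ~10 bits/s per axon, ~12 waking hours/day for 3 years
seconds_awake = 12 * 3600 * 365 * 3
retinal_bits = 2 * 1_000_000 * 10 * seconds_awake

print(f"LLM training tokens:  {llm_tokens:.1e}")
print(f"child words by age 3: {child_words:.1e}")
print(f"retinal input, bits:  {retinal_bits:.1e}")
print(f"token/word ratio:     {llm_tokens / child_words:.0f}x")
```

On these assumed numbers the text-only gap is about four orders of magnitude in the LLM's favor, while raw retinal input dwarfs the token count — which is exactly the "but, human child exposure is massive just from retinal input" caveat (bits and tokens are of course not commensurable units).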
18:05 < fenn> baby chickens learn at about the same rate as visual transformers
18:05 < fenn> uh, sample efficiency
18:07 < fenn> this whole thing is relevant https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
18:08 < fenn> it's not that biology is inherently superior, it's that the architecture of current silicon designs is not suited to the task
18:08 < fenn> you need memory dispersed within the parallel compute nodes, the more dispersed the better
18:08 < fenn> biology takes it to the limit where the memory IS the compute
18:09 < fenn> a 1:1 allocation
18:09 < kanzure> hm, this link does not talk about learning rates
18:10 < kanzure> ok maybe under vision
18:10 < fenn> https://arxiv.org/abs/2312.02843 baby chicken sample efficiency vs vision transformer
18:11 < fenn> spoiler: they're the same within an order of magnitude
18:11 < kanzure> .wik human speechome project
18:11 < saxo> "The Human Speechome Project ('speechome' as an approximate rhyme for 'genome') is an effort to closely observe and model the language acquisition of a child over the first three years of life. / The project was conducted at the Massachusetts Institute of Technology's Media [...]" - https://en.wikipedia.org/wiki/Human_speechome_project
18:11 < EmmyNoether> "The Human Speechome Project ('speechome' as an approximate rhyme for 'genome') is an effort to closely observe and model the language acquisition of a child over the first three years of life.
/ The project was conducted at the Massachusetts Institute of Technology's Media [...]" - https://en.wikipedia.org/wiki/Human_speechome_project
18:11 < kanzure> nsh: bots are fighting with each other again
18:12 < kanzure> "and transcribing significant speech added a labor-intensive dimension" this needs a whisper update
18:13 < kanzure> something something podcast speech anonymization filters
18:14 < fenn> baby babble is not going to work with whisper
18:14 < kanzure> "The resulting corpus, which already contains over 100,000 hours of multi-track recordings, constitutes the most comprehensive record of a child's development made to date."
18:14 < fenn> wait, they only have a single data point?
18:14 < fenn> wtf
18:14 < fenn> literally anyone could do this at any time
18:15 < kanzure> no follow-up publications since 2012 https://www.media.mit.edu/cogmac/projects/hsp.html
18:15 < fenn> strap a cellphone to a baby. done
18:15 < fenn> jesus christ what are the other 8 billion people doing
18:15 < kanzure> so parents already do that
18:16 < kanzure> you could probably just take an average across a few million children that are (already) placed in front of phones all day
18:16 < fenn> what makes "the human speechome" special then?
18:16 < kanzure> oh, AFAIK this is the only project that actually did it
18:16 < fenn> the cellphone has to be turned on and recording audio, of course
18:16 < kanzure> i just mean that you don't have to convince anyone to strap a smartphone to a baby--- they do it naturally (thus the debates about screen time)
18:17 < fenn> presumably this must be done earlier than when babies are given devices
18:17 < fenn> like a newborn can't even move in a coordinated fashion, what's it going to do with a phone
18:17 < kanzure> i suspect the american pediatric association recommends no screen time before 2 years of age specifically because parents were giving children screen time before 2 years of age
18:18 < fenn> they had to be seen giving some sort of recommendation
18:18 < kanzure> sorta dark i guess...
18:18 < fenn> the alternative is colorful things hanging from strings over the crib
18:19 < kanzure> kinda weird that this speechome project doesn't even have a stats page
18:19 < kanzure> how many hours of actual speech?
18:19 < kanzure> how many words were exposed? what was the entropy? what was the word learning rate for this subject?
18:20 < kanzure> and then compare learning rate and language acquisition rate against LLMs or something (or whisper?)
18:22 < fenn> that's basically what they did with the chickens
18:24 < fenn> i think the data quality and ...
curriculum preparation (if there even is one) is much worse for a typical 2021 LLM than for a human
18:24 < kanzure> i don't think human curriculums are likely well organized, it's probably 99% just some variation of immersion learning
18:24 < kanzure> for first 1-2 years
18:24 < fenn> they're treating it like a big blender when there is evidence that LLMs learn faster from simplified synthetic data sets
18:25 < kanzure> okay but nobody has figured out simplified human synthetic data sets AFAIK
18:25 < kanzure> it's like making contact with an alien species and you want to transmit prime numbers except the party on the other end of the line isn't cognitively developed yet
18:26 < fenn> so obviously the next step is presenting the training data in a structured order right? but somehow the sequel to "tiny stories" (synthetic data) was not "tiny computer school" but rather "let's generate random python code examples"
18:26 < kanzure> i forget what this is called. there was some scheme someone devised. you start by transmitting numbers and primes, and then make up arithmetic from there, and then you introduce words somehow. was this from a movie or was it a real thing attached to the VGER probe?
18:27 < fenn> contact had something like that. the NSA had a decryption challenge along those lines, and it's been done to death in science fiction
18:27 < fenn> the pioneer record had some math stuff iirc
18:27 < fenn> the story of your life
18:28 < kanzure> SETI's messaging program (METI) isn't the right fit because there's no interaction between the two parties, it's just one way transmission
18:28 < fenn> grocery store's about to close, i'll bbiab
18:29 < kanzure> "Algorithmic communication systems are a relatively new field within CETI.
In these systems, which build upon early work on mathematical languages, the sender describes a small set of mathematic and logic symbols that form the basis for a rudimentary programming language that the recipient can run on a virtual machine."
18:29 < kanzure> that doesn't make sense. why would the recipient have a compatible virtual machine?
18:30 < kanzure> "Logic Gate Matrices (a.k.a. LGM), developed by Brian McConnell, describes a universal virtual machine that is constructed by connecting coordinates in an n-dimensional space via mathematics and logic operations, for example: (1,0,0) <-- (OR (0,0,1) (0,0,2)). Using this method, one may describe an arbitrarily complex computing substrate as well as the instructions to be executed on it."
18:31 < kanzure> "Astrolinguistics: Design of a Linguistic System for Interstellar Communication Based on Logic" or "lingua cosmica astrolinguistics"
18:32 < kanzure> .wik astrolinguistics
18:32 < saxo> "Astrolinguistics is a field of linguistics connected with the search for extraterrestrial intelligence (SETI). / Arguably the first attempt to construct a language for interplanetary communication was the AO language created by the anarchist philosopher Wolf Gordin (brother [...]" - https://en.wikipedia.org/wiki/Astrolinguistics
18:32 < EmmyNoether> "Astrolinguistics is a field of linguistics connected with the search for extraterrestrial intelligence (SETI). / Arguably the first attempt to construct a language for interplanetary communication was the AO language created by the anarchist philosopher Wolf Gordin (brother [...]" - https://en.wikipedia.org/wiki/Astrolinguistics
18:33 < kanzure> https://cosmicos.github.io/
18:36 < fenn> .wik solomonoff induction
18:36 < saxo> "Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science."
- https://en.wikipedia.org/wiki/Solomonoff_induction
18:36 < EmmyNoether> "Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science." - https://en.wikipedia.org/wiki/Solomonoff_induction
18:38 < kanzure> a variation of this work that allows for two-way interactive communication (eg testing and dynamic feedback adjustment) would be interesting and could be applied to cross-language human communication, human-animal communication, and machine learning model extraction
18:39 < kanzure> "CosmicOS is a way to create messages suitable for communication across large gulfs of time and space. It is inspired by Hans Freudenthal's language, Lincos, and Carl Sagan's book, Contact. CosmicOS, at its core, is a programming language, capable of expressing simulations. Simulations are a way to talk, by analogy, about the real thing they model."
18:42 < kanzure> unfortunately this (cosmicos) requires too much abstract study by the recipient. language immersion training is usually highly interactive and has feedback and does not require logical analysis or careful intentional mental simulation.
19:44 < fenn> it turns out i had an extra hour before they closed
19:44 < fenn> me grunting about solomonoff induction was attempting to point at a method by which alien intelligences could arrive at a minimal shared virtual machine
19:45 < fenn> and not be overfit to the data in the transmission
19:47 < fenn> we share not only the transmitted bits themselves, but also the rest of the observable universe.
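The LGM fragment kanzure quoted earlier — cells in an n-dimensional coordinate space defined by logic operations on other cells, e.g. (1,0,0) <-- (OR (0,0,1) (0,0,2)) — is easy to make concrete. The evaluator below is a minimal guessed interpretation of that single published example, not McConnell's actual specification; the operation set and program encoding are assumptions:

```python
# Toy evaluator for the Logic Gate Matrix idea: each cell (a coordinate
# tuple) is either an input or is defined by a logic operation over other
# cells. Illustrative sketch only; the real LGM encoding may differ.

def evaluate(program, inputs):
    """program: dict cell -> ("OR"/"AND", a, b) or ("NOT", a);
    inputs: dict cell -> 0/1. Returns all resolved cell values."""
    cells = dict(inputs)

    def resolve(cell):
        if cell in cells:
            return cells[cell]
        op, *args = program[cell]
        vals = [resolve(a) for a in args]   # recursively resolve operands
        if op == "OR":
            out = vals[0] | vals[1]
        elif op == "AND":
            out = vals[0] & vals[1]
        elif op == "NOT":
            out = 1 - vals[0]
        cells[cell] = out
        return out

    for cell in program:
        resolve(cell)
    return cells

# (1,0,0) <-- (OR (0,0,1) (0,0,2)), plus a NAND built from AND and NOT,
# showing how a universal gate (and hence any circuit) can be described
program = {
    (1, 0, 0): ("OR", (0, 0, 1), (0, 0, 2)),
    (1, 0, 1): ("AND", (0, 0, 1), (0, 0, 2)),
    (1, 0, 2): ("NOT", (1, 0, 1)),          # NAND of the two inputs
}
out = evaluate(program, {(0, 0, 1): 1, (0, 0, 2): 0})
print(out[(1, 0, 0)], out[(1, 0, 2)])       # prints "1 1"
```

This also illustrates kanzure's objection: the sketch only works because sender and recipient already agree on what "OR" and the coordinate convention mean, which is the bootstrapping problem the surrounding discussion is about.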
that ought to narrow down the search space of compatible virtual machines by quite a lot
19:49 < fenn> if we're being honest, even without any special effort on the sender's part, any alien intelligence worth talking to ought to be able to figure out the meaning of most symbols from context and shared observation of natural phenomena
19:49 < fenn> math language is baby-speak for aliens i guess
19:55 < fenn> i would be cautious trying to use this for anything beyond bootstrapping, because it's not going to be a system widely used and tested by humans and probably contains a lot of bugs and stupid mistakes just due to being an obscure topic without much history (not enough eyes to find the shallow bugs)
20:12 < fenn> kinda surprised examine.com has nothing about shiitake
20:16 < hprmbridge> Eli> Examine doesn’t have enough people helping out. Even their well documented stuff is missing a lot of papers. Somehow they need to automate the process
20:17 < fenn> gentlemen, we have the technology
20:18 < fenn> examine.com is a for-profit company that doesn't accept outside contributions
20:18 < fenn> i like the concept; i might even be willing to pay for the service.
someone needs to be doing quality control of supplements and claims about supplements, and i don't trust the FDA with this task
20:19 < fenn> just an independent "certified not bullshit" stamp
20:19 < fenn> but in general there just isn't enough research to draw from in the first place, and nobody is willing to fund it
20:20 < fenn> maybe we could tax clinical trials, since they already are throwing zillions of dollars at them
20:20 < fenn> logically it makes sense to find new uses for substances that have already been proven to be safe, thus bypassing the need for safety trials
20:21 < fenn> i don't see a need for efficacy trials before a drug can be distributed
20:47 -!- mxz [~mxz@user/mxz] has quit [Ping timeout: 272 seconds]
20:56 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has joined #hplusroadmap
21:00 -!- Ashstar [~Ashstar@mobile-166-171-251-240.mycingular.net] has joined #hplusroadmap
21:00 < Ashstar> https://link.springer.com/article/10.1007/s11357-023-00936-w
21:01 < Ashstar> Blood biomarker profiles and exceptional longevity: comparison of centenarians and non-centenarians in a 35-year follow-up of the Swedish AMORIS cohort | GeroScience
22:52 -!- mxz [~mxz@user/mxz] has joined #hplusroadmap
--- Log closed Sat Feb 10 00:00:54 2024