--- Log opened Wed Oct 11 00:00:03 2023
--- Day changed Wed Oct 11 2023
00:00 < sphertext_> oh actually it did get orphan drug status, in particular for DiGeorge syndrome, which is characterised by absence of thymus
00:38 < sphertext_> ok https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9363815/
00:38 < sphertext_> .t
00:39 < EmmyNoether> A Double-blind Multicenter Two-arm Randomized Placebo-controlled Phase-III Clinical Study to Evaluate the Effectiveness and Safety of Thymosin α1 as an Add-on Treatment to Existing Standard of Care Treatment in Moderate-to-severe COVID-19 Patients - PMC
00:39 < sphertext_> this is as legit as it gets
00:39 < fenn> but it's covid research so it's useless
00:40 < sphertext_> US study, non-affiliated, results satisfied expectations. main problem: small sample size
00:40 < sphertext_> fenn wdym
00:41 < fenn> there's a much higher tolerance for risk in people dying of a respiratory disease (or there should be anyway)
00:42 < sphertext_> fenn im not following
00:42 < fenn> how do you tell apart the effects of covid vs the effects of the treatment?
00:42 < fenn> i'm assuming your goal is not to treat covid
00:42 < sphertext_> covid is an infectious disease.. ta1 upregulates the immune system
00:43 < sphertext_> if ta1 shows improved outcomes in covid infections, that supports the hypothesis
00:44 < fenn> why do you want to upregulate the immune system?
00:44 < sphertext_> because the immune system gets fucked as you age and it's one of the main reasons we get cancer, increased incidence of infections and accumulating senescent cells
00:45 < fenn> ok so do you see the problem yet
00:45 < sphertext_> i dont
00:45 < fenn> it's not a cancer study
00:45 < sphertext_> it's an infection study
00:46 < sphertext_> in fact, it's an immunodeficiency study
00:46 < sphertext_> i get what you mean, it's not _direct_ evidence as a _rejuvenation_/lifespan therapy. but as indirect evidence, it is _supportive_
00:47 < sphertext_> with those things, what you want to find is: solid evidence that it is safe; supportive/indirect evidence that it might benefit longevity
00:48 < sphertext_> because that's the best you'll ever get
00:48 < fenn> but this is like saying "look, they use PFOA firefighting chemicals to put out burning airplanes, therefore it's safe"
00:51 < sphertext_> well sick ppl are usually more sensitive to side effects, because their systems are compromised and can't maintain homeostasis. so they will actually overreact to interventions
00:52 < sphertext_> you usually end up having to downplay the statistical increase in side effects observed in sick cohorts, because you should assume a healthy person is much more resilient
00:53 < sphertext_> and phase I trials are always done in healthy volunteers. ta1 has been through phase II and III trials for decades
00:54 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
00:55 < fenn> maybe there's no monopoly incentive for dumping a billion dollars into phase II trials
00:55 < fenn> phase III*
00:55 < fenn> it's a bioidentical, so you can't patent it, right?
00:56 < sphertext_> it's definitely not patented, although there is only 1 pharma company left who manufactures it
00:56 < sphertext_> also, it's a peptide, and you can't patent peptides at all
00:56 < fenn> doubt
00:56 < sphertext_> pretty sure
00:58 < sphertext_> ok, yeah. you can patent it if it doesn't occur naturally inside the body
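For a sense of what the "small sample size" complaint above means in practice, here is a minimal sketch of the standard two-proportion sample-size formula for a mortality endpoint. The 30% vs 20% mortality rates, 5% alpha and 80% power are illustrative assumptions, not figures from the trial linked above.

# Rough sample-size estimate for a two-arm trial with a binary (mortality) endpoint,
# using the normal approximation. All numbers are illustrative assumptions, not
# values taken from the thymosin alpha-1 COVID study linked above.
import math
from scipy.stats import norm

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # power
    var = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return math.ceil((z_a + z_b) ** 2 * var / (p_control - p_treated) ** 2)

# Detecting a drop from 30% to 20% mortality takes roughly 290 patients per arm,
# which is why small phase-II/III cohorts struggle to reach significance.
print(n_per_arm(0.30, 0.20))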
00:58 < sphertext_> you're right
01:02 < sphertext_> so it did have some patents over it back in 2000 https://europepmc.org/article/med/12090542
01:02 < sphertext_> also, it seems it was mostly approved in 3rd world countries
01:03 < sphertext_> ah man, such a mixed bag
01:04 < sphertext_> also: one study found that in very severe COVID cases it actually increased mortality; and another study found that prophylactic use in healthy medical staff did not prevent COVID infections.
01:06 < fenn> supposedly a lot of covid lung damage is caused by immune overreaction
01:06 < sphertext_> it's complicated, in general covid causes immunosuppression
01:06 < sphertext_> but also cytokine storms
01:15 < sphertext_> Italy and Singapore seem to be the most high-profile countries that approved ta1
01:17 < sphertext_> and the only reference to a trial in healthy volunteers. it's from 1989 so I could only find this quote "In an interesting clinical trial in healthy volunteers 65-101 years of age, Ta1 was studied as an adjunct to boost the efficacy of the influenza vaccine. Ta1 was found to significantly increase the percentage of responders to the vaccine and reduce the incidence of serologically confirmed influenza attack rates in elderly subjects above
01:17 < sphertext_> the age of 80 years"
01:28 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
01:28 -!- sphertext_ [~sphertext@user/sphertext] has quit [Ping timeout: 248 seconds]
01:30 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Remote host closed the connection]
01:32 -!- sphertext_ [~sphertext@user/sphertext] has joined #hplusroadmap
04:02 -!- Llamamoe [~Llamamoe@46.204.77.176] has joined #hplusroadmap
04:51 -!- Llamamoe [~Llamamoe@46.204.77.176] has quit [Quit: Leaving.]
04:51 -!- diamond [~user@89.223.35.3] has joined #hplusroadmap
08:30 -!- diamond [~user@89.223.35.3] has quit [Ping timeout: 255 seconds]
09:59 -!- catalase [catalase@freebnc.bnc4you.xyz] has joined #hplusroadmap
10:06 -!- catalase [catalase@freebnc.bnc4you.xyz] has quit [Remote host closed the connection]
10:21 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
10:58 < kanzure> mistral 7b paper https://arxiv.org/abs/2310.06825
10:58 < justanotheruser> it's a bit lacking don't you think?
10:58 < docl> what's the best 7b model these days?
10:59 < justanotheruser> possibly zephyr, a finetuned mistral 7b
10:59 < justanotheruser> always can check the hf leaderboard https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
11:02 < docl> I'm currently using the xwin one along with this frontend https://github.com/oobabooga/text-generation-webui
11:06 < justanotheruser> hmm, an llm optimized for alpacaeval
11:07 < justanotheruser> I question how many of the llms on the top of the leaderboard with 3 hearts are just trained on the eval data
12:26 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 252 seconds]
12:54 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
13:03 < muurkha> heh, zephyr
13:42 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
13:46 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 255 seconds]
14:08 -!- yashgaroth [~ffffffff@2605:a601:a0e0:8f00:904a:9b95:e539:aada] has joined #hplusroadmap
14:19 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
14:19 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
14:21 < TMA> what are you using LLMs for?
14:26 < TMA> my experience so far was that the result was (1) completely off-topic or (2) pure hallucination
14:28 < TMA> and that putting some of the keywords from the prompt into ddg and skimming the blurbs on the first five or so results is a more robust way to usefulness
14:45 < sphertext> curious to hear answers too, as personally i haven't found the enthusiasm relatable
14:46 < superkuh> For recreation on IRC. It's cool to have a bullshit generator AI. gpt3.5 is also pretty amazing at generating perl regex.
14:59 < kanzure> gpt3.5 is also good at quick definitions and other queries
14:59 -!- ike8 [12fdf2ee08@irc.cheogram.com] has quit [Ping timeout: 272 seconds]
15:00 < sphertext> and it doesn't concern you that it will occasionally confabulate without warning?
15:01 < kanzure> i dunno, what is my own confabulation rate?
15:01 < sphertext> to me that is sort of like the cardinal sin of computing
15:02 < sphertext> well, impossibility to debug is a close second
15:09 < fenn> i'm sure we can add in a few more, like deception and subscription based pricing
15:11 < sphertext> i feel like as a culture we have given up on the old convention that diligent labour will be rewarded economically. so ppl instead turn to black box gambling machines, hoping that their genius breakthrough will come from the next prompt, or the next crypto shitcoin
15:19 < fenn> it wasn't just a convention; historically that was how wealth was created
15:19 < fenn> it's still true to some extent, but luck plays a much larger role
15:20 < fenn> or rather, arranging to have been in the right place at the right time
15:21 < fenn> and LLMs are useful, regardless of what you think
15:36 -!- ike8 [12fdf2ee08@irc.cheogram.com] has joined #hplusroadmap
15:43 < ike8> Off topic; still looking at KDS-2010. Their reversibility test is fascinating. They call it "centrifugation-ultrafiltration" washing. They made a solution of MAO-B with docked KDS-2010 and used a centrifuge to force the solution through a filter that separates MAO-B from KDS-2010. MAO-B had over 80% recovery in activity after three washes.
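Referring back to the 7B-model discussion above, here is a minimal sketch of running the zephyr fine-tune locally through the Hugging Face transformers pipeline (a scripted alternative to docl's text-generation-webui setup). The model id follows the public HuggingFaceH4 release and the sampling parameters are ordinary defaults; treat both as assumptions, and a GPU with enough memory (or a quantized variant) is also assumed.

# Assumes transformers + torch are installed and there is enough GPU memory for a
# 7B model in bf16; otherwise load a quantized build via text-generation-webui.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",   # the finetuned mistral 7b mentioned above
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Write a Perl-compatible regex that matches an ISO 8601 date."},
]
# zephyr was trained with a chat template, so format the conversation accordingly
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])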
15:49 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 252 seconds]
16:18 -!- sphertext_ [~sphertext@user/sphertext] has quit [Ping timeout: 240 seconds]
16:18 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
16:20 -!- sphertext_ [~sphertext@user/sphertext] has joined #hplusroadmap
16:21 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
17:24 < hprmbridge> nmz787> Gpt generated code can be tested... sometimes it's easier to write tests than an implementation
17:25 < sphertext> ike8 what is docked?
17:26 < ike8> attached. that is the word they used in the paper
17:26 < sphertext> @nmz787 ok but in those special cases where it's easier to write tests than an actual implementation, how often does gpt actually produce correct code?
17:27 < sphertext> ike8 i see. so this is in contrast to selegiline, which actually destroys the enzyme
17:28 < sphertext> how long does the KDS-2010 inhibition last?
17:30 < sphertext> this suggests the compensation in DAO is triggered not by a lack of MAOB activity per se, but rather by a lack of the enzyme's presence. so if the enzyme is present but rendered inactive, the compensatory mechanism isn't triggered. they should investigate this
17:30 < sphertext> also are there other reversible MAOB inhibitors, and how does KDS2010 compare to them?
17:31 < sphertext> you'd expect other inhibitors of MAOB activity that don't destroy the enzyme to have a similar effect profile
17:37 < hprmbridge> nmz787> Spheretext it's more like reviewing code, sometimes it's easy to spot minor errors and correct them, other times you end up having to do the work
17:37 < hprmbridge> nmz787> For stuff just past being simple, like regexes, or linux command syntax, it's pretty darn good
17:38 < hprmbridge> nmz787> Faster than reading 5 stackoverflow posts, reading the man page that's confusing you
17:39 < sphertext> nmz787 if you plan to continue using CLI tools, then reading man pages saves you a lot of time in the long term
17:40 < sphertext> also – even if you write tests, do you really feel comfortable running generative code on your machine, which you personally don't understand, and which hasn't been reviewed by a third party?
17:41 < sphertext> ike8 seems like Safinamide is the only reversible MAOB inhibitor currently used in clinical practice. but it has somewhat promiscuous effects, including sigma receptor activation and ion channel blocking.
17:43 < sphertext> as well as glutamate inhibition. however, preclinical tests showed neuroprotective effects, and perhaps those additional properties contribute to that
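To make the reversible-versus-covalent distinction behind ike8's wash-out experiment concrete, here is a toy calculation assuming simple competitive inhibition, where fractional activity is roughly 1/(1 + [I]/Ki) at sub-Km substrate concentrations. The Ki, starting concentration and per-wash dilution factor are made-up illustrative values, not KDS-2010 parameters.

# Toy model of why "centrifugation-ultrafiltration" washing restores activity for a
# reversible inhibitor: each wash dilutes the free inhibitor, and occupancy follows
# the equilibrium. A covalent inhibitor like selegiline would stay near 0% regardless.
# Ki, starting concentration and dilution factor are arbitrary illustrative values.

def activity_fraction(inhibitor_nM: float, ki_nM: float) -> float:
    # competitive inhibition at sub-Km substrate: v/v0 ~ 1 / (1 + [I]/Ki)
    return 1.0 / (1.0 + inhibitor_nM / ki_nM)

ki = 10.0       # assumed dissociation constant, nM
conc = 1000.0   # assumed starting free inhibitor concentration, nM
for wash in range(4):
    print(f"washes={wash}  [I]={conc:g} nM  activity={activity_fraction(conc, ki):.0%}")
    conc /= 100.0  # assume each ultrafiltration pass dilutes the free inhibitor ~100-fold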
17:45 < sphertext> https://www.frontiersin.org/files/Articles/209773/fphar-07-00340-HTML/image_m/fphar-07-00340-t001.jpg
17:47 < sphertext> it would be very interesting to see what Safinamide does to reactive astrocytes that would normally be upregulating DAO to continue secreting toxic GABA near amyloid plaques
17:47 < sphertext> someone give me a lab and funding to test this :(
17:48 < hprmbridge> nmz787> Spheretext I'm a professional programmer, I review all the code I get from gpt, it's not that I don't understand it, rather I just want to save time putting the pieces together myself if possible
17:48 < ike8> > someone give me a lab and funding to test this :(
17:48 < ike8> when I'm rich
17:48 < hprmbridge> nmz787> And sometimes man pages just don't make sense, or you need something RIGHT NOW
17:52 < sphertext> nmz787 hence my question, does it actually save you time, specifically in those scenarios when the problem is so complex that writing tests is easier than writing an implementation? or do you end up having to patch the generated code anyway, because in those scenarios, the code is broken more often than not. in that case you end up having to write both the tests and part of the implementation; and the latter as a reactive patch on an
17:52 < sphertext> unknown codebase, which comes with additional cognitive load, compared to starting from scratch. so – do you actually save time?
17:53 < sphertext> also if man pages don't make sense, chances are stackoverflow doesn't have good answers either, so chatgpt has no training data for your question
17:59 < ike8> there is something to be said about the pure pleasure of architecting code yourself, even if it takes a little longer.
18:01 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 258 seconds]
18:06 < sphertext> ike8 i think KDS2010 is likely going to be approved for dementia. it's definitely interesting and fills an important gap in reversible MAOB inhibitors. but i would wait just a bit longer to see what other activities it has (if any). in the meantime, if i really wanted to try something like that, i would maybe get safinamide, because it's a bit older, the effect is similar and the safety profile appears to be very good
18:11 < fenn> sphertext have you actually used GPT-4?
18:12 < sphertext> nope. i couldn't figure out a use case
18:12 < fenn> i mean have you tried it at all
18:13 < sphertext> no
18:13 < fenn> ok, well, it's waaaay better than GPT-3.5
18:14 < sphertext> does it actually address the specific concerns i have?
18:14 < fenn> often it knows about approaches to solve the problem that are simpler and better than the way i would have done it
18:14 < fenn> it really depends on the problem
18:15 < fenn> i think there's a conflict of interest in giving honest answers about how much of a help it is
18:15 < fenn> you sound like a n00b or not competent if you say it's a better programmer or makes better choices
18:16 < fenn> so only truly n00b incompetents are going to blurt that out
18:16 < fenn> but everybody has off days...
18:17 < sphertext> well i was pretty concrete in the issues i see with it
18:17 < sphertext> i didn't draw stereotypes or question competence
18:17 < sphertext> and those issues are a deal breaker to me
18:17 < fenn> and you trust the text on the internet?
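To illustrate nmz787's workflow of writing the tests first and reviewing whatever implementation the model hands back, here is a minimal pytest-style sketch. The slugify function is a hypothetical example invented for illustration, not code from the discussion; the tests are the human-written spec, and the function body is the part you would ask the model to fill in and then read before running.

# The tests are the spec the human writes up front; the implementation is what an
# LLM might fill in, to be reviewed before use. `slugify` is a hypothetical example.
import re

def slugify(title: str) -> str:
    # candidate implementation to review: lowercase, drop punctuation,
    # collapse runs of whitespace/dashes into single dashes, trim the ends
    s = re.sub(r"[^a-z0-9\s-]", "", title.lower())
    return re.sub(r"[\s-]+", "-", s).strip("-")

def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace_and_dashes():
    assert slugify("  a  --  b  ") == "a-b"

def test_only_punctuation_gives_empty_string():
    assert slugify("!!!") == ""

Run with pytest; if the generated body fails the spec, you are back to patching unfamiliar code by hand, which is sphertext's objection above.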
18:17 < fenn> even scientists are wrong
18:18 < fenn> i'm sorry but your "deal breaker" doesn't make logical sense
18:18 < sphertext> well, it's a spectrum. code i wrote > code that has been reviewed by a third party > generative code that i don't understand and hasn't been reviewed
18:19 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
18:19 < fenn> what happens when you run the magic black box and it does the thing you actually wanted? is that bad?
18:19 < fenn> are you concerned that the generated code you don't understand is going to wreck your system?
18:20 < sphertext> even a broken clock is right twice a day. is that good or bad?
18:20 < sphertext> yes, in fact, i am concerned about that.
18:21 < fenn> ok then you just have to put in a little more effort to understand it first, instead of blindly running the output
18:21 < fenn> and you can ask "how does this work"
18:22 < fenn> in fact it writes better code if you ask it to explain how it works first
18:22 < fenn> also writing code is not their core competency
18:22 < sphertext> then we come back to my other question. how often does it actually save you time overall, compared to writing the code yourself? considering that you need to write tests + review unknown code, which comes with additional cognitive load compared to writing it from scratch.
18:22 < fenn> they're overgrown language translators really
18:23 -!- justanot1 [~justanoth@gateway/tor-sasl/justanotheruser] has joined #hplusroadmap
18:23 < fenn> it saves cognitive effort
18:23 < sphertext> well, https://deepl.com is an older implementation of ML for language translation compared to chatgpt, and it's absolutely phenomenal and in fact groundbreaking
18:23 -!- justanotheruser [~justanoth@gateway/tor-sasl/justanotheruser] has quit [Ping timeout: 252 seconds]
18:25 < fenn> DeepL is a transformer
18:31 < sphertext> there's something that intuitively seems wrong about having to review machine output. to me, a machine is deterministic and debuggable. perhaps im too much of a traditionalist when it comes to those things
18:39 < fenn> maybe you should stop thinking of it as a machine then
18:39 < fenn> pretend it's a genetically engineered hamster with wires shoved into its tiny brain
18:40 < fenn> a hamster with very good spelling and punctuation
18:41 < sphertext> mental gymnastics to rationalise shitty tech
18:48 < fenn> we've had deterministic predictable provable symbol manipulators for decades, but they fail spectacularly when confronted with the real world
18:50 < sphertext> but so far, chatgpt fails too, whenever it confabulates
18:50 < fenn> imagine even trying to write this in code https://nitter.net/pic/orig/media%2FF8CiwdbbMAA56Gh.jpg
18:52 < fenn> it would be possible for the most part, but you'd spend a month finding and integrating all the necessary databases
18:52 < sphertext> and yet https://media.discordapp.net/attachments/560461575136346115/1161796268188766218/IMG_F65BDB4371D6-1.jpeg?width=814&height=808
18:54 < fenn> well, it WAS instructed to not alter memes and the prompt's intent
18:56 < fenn> really they should just tell it "don't do anything that would make us, OpenAI, look bad in the press"
18:57 -!- yashgaroth [~ffffffff@2605:a601:a0e0:8f00:904a:9b95:e539:aada] has quit [Quit: Leaving]
19:06 < fenn> GPT-4 punted at first, saying it depends on the intent behind the request, but when told about the meme it said "No, it's not appropriate to generate the image as you described. The Piper Perri meme, when applied to real-world conflicts or sensitive cultural situations, can be seen as disrespectful or offensive."
19:19 < hprmbridge> nmz787> Spheretext, it's been hit or miss as to whether it saves time... trending on the hit side tho
19:31 -!- sphertext_ [~sphertext@user/sphertext] has quit [Ping timeout: 272 seconds]
19:35 -!- sphertext_ [~sphertext@user/sphertext] has joined #hplusroadmap
19:36 < hprmbridge> kanzure> sama now directly teasing eliezer or something https://twitter.com/sama ("eliezer yudkowsky fan fiction account")
19:57 < fenn> AGI confirmed
20:00 < sphertext> gotta love it when you have to measure the pixels on a graph in order to get the actual damn data in numerical format
20:00 < sphertext> when reading a research paper, that is
20:02 < fenn> chatgpt can do that :P
20:02 < hprmbridge> kanzure> or kurzweil chart.....
20:03 < sphertext> ppl should really qualify their chatgpt claims with ", sometimes (and you don't really know when it did or didn't do it right)"
20:13 -!- test__ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
20:13 < sphertext> In mice, Ta1 was able to recover thymus index after damage from 0.44 to 1.75 mg/g (once daily injection for 14 days https://doi.org/10.1016/j.canlet.2013.05.006) and from 1.11 to 1.48 mg/g (once daily injection for 7 days https://doi.org/10.1016/j.biopha.2018.09.064); the doses used were equivalent to 1.6mg in humans (i.e. the usual dose). this is a BIG deal
20:13 < fenn> https://fennetic.net/irc/kurzweil_doge_meme_thumbnail.png
20:14 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Remote host closed the connection]
20:16 < sphertext> if you didn't know, the thymus is critical to the immune system and it peaks in adolescence. afterwards, it slowly DISAPPEARS. by old age, your entire thymus turns into a useless blob of fat.
20:18 < fenn> why not just grow new thymus tissue in a dish
20:19 < fenn> one of george church's many fiefdoms is doing that, thymmune therapeutics
20:21 < fenn> no info on the website. "Thymmune’s machine learning-enabled thymic engineering platform allows for the mass production of iPSC-derived thymic cells to restore immune function"
20:29 < sphertext> why depend on a complicated, overpriced therapy, when you can buy a non-toxic peptide and inject it yourself?
20:32 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
20:35 < fenn> presumably the thymus actually does something
20:35 < fenn> iirc it's where antigen presenting cells find t-cells
20:36 < sphertext> yes, which is why ta1 being able to regenerate its mass is such a big deal
20:37 < fenn> i must have missed that part
20:37 < sphertext> did you see my message about recovery of the thymus index?
20:37 < fenn> it didn't mean anything to me at the time
20:37 < sphertext> thymus index means the weight of the thymus divided by the mass of the specimen
20:38 < sphertext> they chemically damage the thymus to reduce its weight
20:38 < sphertext> ta1 restored it by 300% after severe loss and long term treatment and 33% after moderate loss and short term treatment
20:39 < fenn> did they keep going?
20:39 < sphertext> no
20:39 < fenn> i.e. create a huge unnatural thymus
20:40 < sphertext> actually they did create a thymus larger than baseline, using a modified form of ta1
20:41 < fenn> is the effect of thymosin-a1 simply to stimulate thymus growth? and all the beneficial effects are a result of the thymus itself?
20:41 < sphertext_> baseline 1.7; after damage 0.4; after treatment with ta1 1.7; after treatment with modified ta1 2.25
20:42 < sphertext_> 130% thymus
20:42 < fenn> "restoring immune function in animals lacking thymus glands" sounds like it does other things too
20:43 < sphertext_> yes! it's also approved as an orphan drug for DiGeorge syndrome, which is characterised by complete lack of thymus from birth
20:43 < sphertext_> it restores some immune function in those patients
20:44 < sphertext_> its main mode of action for immune upregulation is unrelated to regenerating the thymus; this is just something new i found today, and only demonstrated by a couple of papers in mice.
20:46 < sphertext_> also thymus regeneration is only observed for daily injections over at least 7 days. the normal protocol in humans is 2 injections per week, for a total of 4 injections (up to 10 for serious conditions).
20:49 < fenn> you wouldn't want the humans to be too healthy
20:49 * fenn calls WADA
20:50 < hprmbridge> nmz787> https://www.legacyai.life/
20:51 < hprmbridge> nmz787> "Once you have enough responses, you can share your avatar and allow loved ones to converse with you anytime, anywhere."
20:51 < hprmbridge> nmz787> "And finally, it can be customized to suit your family members' preferences and interests."
20:51 < hprmbridge> nmz787> so I can have a digital representation of my dad (he has a lot of stories)
20:52 < hprmbridge> nmz787> but I can chat with him and exclude "boring" stories, I guess
20:53 < fenn> i had a dream a long time ago that google was doing this, interrogating old people for their wisdom and whatever
21:01 < hprmbridge> nmz787> my other thought was to just record my dad's calls by default from now on, and then just feed that to chatgpt400 (assuming that's out by the time he dies), and see what it could make of things
21:01 < hprmbridge> nmz787> but android prevents me from using a call recording app
21:01 < fenn> i don't really see the need for an AI avatar vs just a recording
21:02 < fenn> i mean we can always take a recording and turn it into a chatbot, hard to go the other direction
21:03 < hprmbridge> nmz787> filtering out two ways to tell the same story (into one coherent story internalized) with old school chatbot software?
21:04 < fenn> you could just upload videos to archive.org
21:05 < fenn> NASA has this oral history project on youtube, just recording all the engineers with a camera, a decent microphone, and an interviewer
21:05 < fenn> i don't see why we can't do that with everyone
21:06 < fenn> it would fill in a lot of details, all the stuff nobody bothered to write down
21:27 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
21:27 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
21:37 < muurkha> nmz787: how does Android prevent you from using a call recording app?
21:42 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
21:51 < fenn> https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
22:00 * fenn considers a spherical brain in vacuum...
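A small worked version of the thymus numbers quoted earlier in this exchange: the recovery percentages sphertext cites, plus the standard FDA body-surface-area scaling that sits behind the "equivalent to 1.6mg in humans" remark. The 0.3 mg/kg mouse dose and 60 kg body weight used in the conversion are illustrative assumptions, not values taken from the two cited papers.

# Thymus index = thymus weight (mg) per gram of body weight; figures as quoted above.
damaged, restored = 0.44, 1.75    # severe damage, then 14 days of daily ta1
print(f"recovery with ta1: {(restored - damaged) / damaged:.0%}")             # ~298%, the "300%" claim
damaged2, restored2 = 1.11, 1.48  # moderate damage, 7 days of daily ta1
print(f"recovery, short protocol: {(restored2 - damaged2) / damaged2:.0%}")   # ~33%
baseline, modified = 1.7, 2.25    # modified ta1 overshoots baseline
print(f"modified ta1 vs baseline: {modified / baseline:.0%}")                 # ~132%, the "130% thymus"

# Standard FDA body-surface-area scaling: HED (mg/kg) = animal dose (mg/kg) * Km_animal / Km_human,
# with Km = 3 for mouse and 37 for human. The mouse dose and body weight below are
# assumptions for illustration, not numbers from the papers linked above.
mouse_dose_mg_per_kg = 0.3
hed_mg_per_kg = mouse_dose_mg_per_kg * 3 / 37
print(f"human equivalent dose for a 60 kg adult: {hed_mg_per_kg * 60:.1f} mg")  # ~1.5 mg, near the usual 1.6 mg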
22:13 < fenn> i think cannell is overlooking the massive redundancy inefficiencies in the brain, in this analysis
23:14 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Ping timeout: 245 seconds]
23:26 < hprmbridge> nmz787> Muurkha https://tech.hindustantimes.com/tech/news/call-recording-apps-banned-on-google-play-store-but-there-is-a-loophole-71652362369607.html
23:28 < hprmbridge> nmz787> I guess i might be able to get some Bluetooth good
23:28 < hprmbridge> nmz787> Gizmo
23:30 < sphertext> so how long until a religious group reveals chatgpt to be their prophet?
23:33 -!- Goober_patrol66 [~Gooberpat@2603-8080-4540-7cfb-0000-0000-0000-113a.res6.spectrum.com] has quit [Quit: Leaving]
23:33 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has joined #hplusroadmap
--- Log closed Thu Oct 12 00:00:53 2023