--- Log opened Sat Apr 01 00:00:19 2023
01:55 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
01:56 -!- stipa [~stipa@user/stipa] has quit [Ping timeout: 255 seconds]
01:56 -!- stipa_ is now known as stipa
02:51 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
02:57 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
03:00 -!- flooded [~flooded@169.150.254.33] has joined #hplusroadmap
03:04 -!- test_ [~flooded@146.70.147.115] has quit [Ping timeout: 255 seconds]
03:30 < jrayhawk> fenn: lol
04:38 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Ping timeout: 250 seconds]
05:02 < hprmbridge> kanzure> "The Hong Kong-based South China Morning Post [has reported] ... project was first unveiled in the Chinese-language journal, Military Medical Science.... the military scientists say they've successfully "inserted a gene from the microscopic water bear into human embryonic stem cells and significantly increased these cells’ resistance to radiation.""
05:02 < hprmbridge> kanzure> https://www.scmp.com/news/china/science/article/3215286/chinese-team-behind-extreme-animal-gene-experiment-says-it-may-lead-super-soldiers-who-survive
05:05 < hprmbridge> kanzure> oh, I thought it said embryo; nevermind.
05:10 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:841a:1eab:15da:f17c] has joined #hplusroadmap
05:17 < hprmbridge> Perry> This isn’t quite true. Eliezer had a religious prohibition on the use of machine learning or other similar techniques to produce an AGI on the basis that doing that would mean that he and his acolytes could not fully control the super human AI that would rule over us for eternity. Therefore, it was necessary that it be built through the use of deliberate techniques, such as GOFAI, so that they
05:17 < hprmbridge> Perry> would understand precisely how it worked and could bend it to their will. I am not being figurative. This was literally the Singularitarian goal and the reason machine learning and the like were looked down on.
05:18 < hprmbridge> kanzure> this isn't about AI. it could be any pet cause.
05:19 < hprmbridge> Perry> Eliezer claimed to me in a Twitter thread that he had at least partially changed his mind on ML being prohibited, and a few years ago, bought some GPUs for a desktop and did some experiments on his own, but concluded that there was no way to actually make any progress towards his goals using these techniques, so he never suggested that anyone at MIRI take them seriously.
05:19 < hprmbridge> Perry> A very common pattern with Eliezer is “I thought about that, and there is no way that could work, so you shouldn’t try it”.
05:20 < hprmbridge> Perry> another even more common pattern, which a number of people have reported to me, and which I have experienced myself, is “don’t talk about that! It’s too dangerous to tell people about that!”
05:21 < nsh> the real question is, why do i keep having to hear about this pointless individual?
05:21 < hprmbridge> Perry> there seem to be some people here who are confused about the situation, and think that I dismiss the possibility that AI could be transformative or could be used for bad things. I’m not talking about that at all. I am talking about the fact that the debate has been hijacked by a crazy person who unaccountably is widely thought of as an expert when he has no meaningless expertise whatsoever.
05:22 < hprmbridge> Perry> meaningful
05:22 < nsh> the great thing about crazy is that you don't need to bring in over the threshold into the hearth
05:22 < nsh> *it
05:22 < nsh> you can just note it in passing and carry on
05:22 < hprmbridge> Perry> nsh: people are giving this person a platform. It doesn’t matter whether you don’t want to think about him.
05:22 < nsh> people are, indeed
05:22 < hprmbridge> kanzure> nsh: well, he has been slowing down progress for a while now. and is getting media attention. it would be good to immunize people. like outside this group.
05:23 < nsh> immunize quietly
05:23 < hprmbridge> kanzure> hm?
05:23 < hprmbridge> Perry> nsh: you can’t avoid that people are listening to him.
05:23 < nsh> i can incline you to avoid making me have to by proxy
05:23 < hprmbridge> Perry> There is no way to “immunize people quietly”.
05:23 < nsh> and i have
05:23 < nsh> it's up to you what you do with that suggestion
05:23 < hprmbridge> Perry> I will be ignoring it.
05:23 < nsh> perhaps pick better friends
05:24 < hprmbridge> Perry> People in Washington interested in regulating the industry are listening to him.
05:24 < hprmbridge> kanzure> nsh: are you just annoyed that you have to hear about it and waste your time, or is your complaint a different weirder third thing
05:24 < nsh> it's just dull. there are a million people who are wrong
05:24 < nsh> i don't feel to fetishise any of them
05:24 < nsh> feel *the need to
05:25 < hprmbridge> Perry> do you ignore congress when it is proposing to pass a dangerous law, because you don’t want to fetishize it?
05:25 < hprmbridge> Perry> do you ignore someone trying to break into your house, because they are wasting your time?
05:25 < nsh> where a bad idea is posited, a better idea is indicated
05:25 < nsh> not a long commentary on the former
05:26 < hprmbridge> Perry> if the IRS sends you a dunning notice for taxes you don’t owe, do you conclude “these are terrible people, I must not think about them, they are wasting my time”?
05:26 < nsh> i have no truck with the IRS nor congress
05:26 < hprmbridge> Perry> They have a truck with you.
05:26 < nsh> you might think, being someone who probably doesn't have a passport
05:27 < hprmbridge> Perry> it’s all fine and well to say “I will ignore things I don’t like in the world”, but the things you don’t like in the world may decide not to ignore you anyway.
05:27 < hprmbridge> Perry> I don’t understand the passport comment.
05:27 < nsh> not everyone lives in your failed state
05:27 < hprmbridge> Perry> Pick your local government. Quit being thick.
05:27 < hprmbridge> Perry> if you live in Germany, pick the Bundestag.
05:28 < hprmbridge> Perry> if you live in Britain, pick the parliament in Westminster. It doesn’t fucking matter. Quit being stupid.
05:28 < nsh> having attained to dull, your commitment thereto at least is commendable
05:28 < nsh> good day
05:29 < hprmbridge> kanzure> being able to escape the grasp of politics should be a higher goal; politics has failed and we shouldn't let it indefinitely harass people that just want to be left alone or go to other planets etc.
05:29 < hprmbridge> Perry> being able to escape politics would be nice. Unfortunately, politics does not want us to escape. Perhaps someday we can do that. At the moment, it is not easily achieved.
05:40 < nsh> meanwhile: https://scottaaronson.blog/?p=6823
05:40 < nsh> .t
05:40 < EmmyNoether> Shtetl-Optimized » Blog Archive » My AI Safety Lecture for UT Effective Altruism
05:43 < hprmbridge> yashgaroth> tbf it seems likelier that Russia and China lead a coalition of states in adopting yud's proposal, targeting a non-signatory USA
05:47 < nsh> .in 1y did EY turn out to have a startling degree of influence over the geostrategic machinations of Russia and China?
05:47 < EmmyNoether> nsh: Will remind at 31 Mar 2024 18:47:35 UTC
05:48 * nsh walks away, muttering and shaking his head
05:48 < hprmbridge> yashgaroth> a year?! I thought AI will kill everyone in the next six months
06:08 -!- Jay_Dugger [~jwd@47-185-212-84.dlls.tx.frontiernet.net] has joined #hplusroadmap
06:16 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
06:32 < Jay_Dugger> Good morning, everyone.
06:38 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
06:49 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has joined #hplusroadmap
07:02 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has quit [Ping timeout: 255 seconds]
08:19 < muurkha> Perry: nsh probably has more experience with the grasp of politics than you are imagining, as you will be aware if you read his Wikipedia page
08:20 < muurkha> yashgaroth: it's possible that AI kills everyone in the next six months, but we can make plans conditional on the alternative
08:23 < muurkha> Perry: the reason Eliezer is widely thought of as an expert is that, by publicly debating his position with actual experts over the last 20 years, he's convinced them that his point of view is worth considering
08:24 < muurkha> which is not to say that he's right, of course, or an expert on AI
08:25 < muurkha> I'm just answering your question about why people give him a platform
08:26 < muurkha> it's not just because he founded a cult and recruited people into it with a Harry Potter fanfic and then used their money to fund machine learning researchers, though that's also true
08:43 < L29Ah> .t https://youtu.be/tG7-OvAl1HE
08:43 < Muaddib> [tG7-OvAl1HE] MUSIC TO DESTROY PLANETS TO... | "Satisfactory Night Fever" | Rap EDM Song (3:53)
08:43 < EmmyNoether> No title found
08:47 < hprmbridge> Perry> https://cdn.discordapp.com/attachments/1064664282450628710/1091750608987504780/image0.jpg
09:14 < hprmbridge> lachlan> horrific
09:14 < docl> The doom scenario for AGI, should it occur, is more of a total kill scenario than nuclear. This in and of itself should not be controversial. Why are people needing this explained to them?
09:15 < hprmbridge> lachlan> Do people need that explained to them?
09:19 -!- test_ [~flooded@146.70.183.19] has joined #hplusroadmap
09:22 -!- flooded [~flooded@169.150.254.33] has quit [Ping timeout: 248 seconds]
09:24 < hprmbridge> kanzure> yeah nobody disputes that if you talk about a scenario where everyone dies then yes by definition everyone dies in that scenario.
09:25 < hprmbridge> kanzure> that's not the issue
09:28 < hprmbridge> kanzure> in fact, it's a rather unremarkable observation
09:29 < docl> depends how much of a newb the guy he was replying to was
09:29 < muurkha> yet I sometimes hear people taking issue with specifically Eliezer's point that preventing an AGI from killing everyone is more important than preventing global thermonuclear war
09:30 < muurkha> (even people who think it's important that people survive)
09:30 < muurkha> as opposed to taking issue with his estimates of how likely that outcome is in the case of unrestricted AI research, or the prospects for restricting it
09:31 < hprmbridge> kanzure> it's not about preventing a specific thing, it's about control over a broad class of things for which the only solution is apparently widespread violence / changing your life values or beliefs to service The Cause and only the cause.
09:32 < muurkha> yeah, conditional on there being biological humans ten years from now, it would be a lot better if they weren't surviving the aftermath of a global nuclear apocalypse
09:32 < muurkha> or living under a global totalitarian regime
09:33 < muurkha> which does seem to be Eliezer's proposal; fortunately this is not politically viable
09:33 < docl> widespread violence? I'm not seeing it. he advocates something analogous to nuclear controls as a possible hail mary
09:33 < muurkha> yes, but it's something that treats CPUs as analogous to plutonium, docl
09:34 < docl> at scale, sure
09:34 < muurkha> uranium then
09:37 < docl> I can see "global totalitarian regime" as a slippery slope to look out for, but just not being able to build gigantic data centers and having all compute hardware sold end up in a govt database (for the purpose of not having large secret clusters) seem relatively small on the scale of steps in that direction.
09:38 < docl> I remember the 90's when we didn't all have powerful supercomputers and life was sort of fine
09:39 < hprmbridge> kanzure> btw, for someone that believes that intelligence is extremely dangerous and can destroy the entire universe or every living thing in it, is there a reasonable explanation for why individual humans are not considered extremely dangerous in their thinking? Or is it just not a revealed preference because it would be politically untenable or dead on arrival.
09:39 < hprmbridge> kanzure> (I don't mean specifically you docl btw although if you have an answer I'd hear it. I was speaking generally about it.)
09:41 < docl> on the scale of orangutans to john von neumann, who has a better idea of the steps needed to destroy the universe?
09:49 < docl> there's also this idea EY has that AI are alien to humans and we shouldn't extrapolate from what a smart human would do. I'm not so sure about this given they are being trained on human output and tweaked by human programmers. but as they self optimize more and use ai generated outputs as data inputs more, we could see more of a disconnect.
09:49 < docl> https://www.lesswrong.com/posts/Zkzzjg3h7hW5Z36hK/humans-in-funny-suits
09:50 < hprmbridge> kanzure> I apparently have a lot more faith in smart humans to achieve big things than you or he does (not sure if you specifically)
10:12 < docl> I haven't hit a point of losing my optimism thus far. I do however feel obliged to hedge my bets, which seems to minimally include not shouting down EY when he says these things in good faith
10:18 < hprmbridge> kanzure> humans can do big dangerous things, is what I mean, and we're largely okay with this and we believe in freedom and autonomy anyway
10:18 < hprmbridge> Perry> at some point soon, someone is going to start murdering people, or setting off biological weapons, because of Eliezer. It might be one of Eliezer’s people with Eliezer’s blessing, it might not be with his blessing. None of this has anything to do with discussing AI risk. None of this has anything to do with futurism. This has to do with the fact that Eliezer is a literal cult leader, and needs to
10:18 < hprmbridge> Perry> be separated from the rest of the discussion. He is mentally ill, he has made a lot of his followers mentally ill, he is dangerous. We could all die because of him. I am not being hyperbolic. I mean this sincerely.
10:19 < hprmbridge> Perry> there is a tendency among certain people to believe that all sets of ideas need to be engaged on their surface level. It’s fine to discuss AI risk if you want to. That has nothing to do with the Eliezer situation.
10:20 < hprmbridge> Perry> people seem to keep thinking that if you deny that Eliezer is sane you are denying that AI risk is real.
10:20 < hprmbridge> Perry> The two are not the same.
10:26 < docl> I actually tend to feel more upset about EY being treated unfairly than I do about disagreements about AGI risk. Maybe this isn't particularly rational of me, it's just how I feel.
10:27 < hprmbridge> Perry> He lost the right to be treated “fairly” when he began to inspire people to commit murder.
10:28 < hprmbridge> Perry> I personally like him. He’s a nice guy. He’s funny, he’s smart, he’s witty. He’s a lot of fun at a party. He also built a cult around himself and lost track of reality. He has also started inspiring spinoff cults. There is no question of “fairness” here anymore. On a personal level I wish him no ill. but it’s not at that level anymore.
10:29 < nsh> there are many things one can be full of, and oneself is amongst the worst
10:30 < hprmbridge> Perry> https://cdn.discordapp.com/attachments/1064664282450628710/1091776491806806107/image0.jpg
10:40 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has joined #hplusroadmap
10:44 < docl> that pic seems more like self aware humor. (to the effect that he knows he comes across as a know-it-all)
10:49 < hprmbridge> kanzure> Unfortunately it seems I did not take notes for that talk in particular https://diyhpl.us/wiki/transcripts/singularity-summit-2009/
10:54 -!- o-90 [~o-90@gateway/tor-sasl/o-90] has quit [Ping timeout: 255 seconds]
10:58 < docl> the bay area seems to have a big endemic cult problem that seems not to have been started by EY. I wonder to what degree he's culpable for the formation of these splinter groups in reality (in any sense other than the AI doom scenario giving them a doomsday to fixate on). they seem like interlopers from my (geographically remote, not entirely plugged in) perspective.
11:30 < hprmbridge> docl> https://twitter.com/ESYudkowsky/status/1642229374656348161
11:34 < hprmbridge> kanzure> huh? his original reply was not about nukes. nobody said it was.
11:35 < hprmbridge> kanzure> the question was "how many people should be allowed to die to prevent AGI" and he gave his answer.
11:42 < hprmbridge> docl> https://twitter.com/ESYudkowsky/status/1642234394307067904
11:43 < nsh> is mary to blame for typhoid?
11:43 < nsh> regardless i'd steer clear
11:44 < nsh> a simple notion still seemingly lost on ye
11:44 < nsh> dwell on matters more fitting
11:44 < nsh> perhaps, but 'tis your own trip
11:44 < hprmbridge> kanzure> speaking of which
11:44 < hprmbridge> kanzure> what are you dwelling on lately nsh
11:45 < nsh> https://ora.ox.ac.uk/objects/uuid:ceef15b0-eed2-4615-a9f2-f9efbef470c9/download_file?file_format=application%2Fpdf&safe_filename=Thesis_GISCARD.pdf&type_of_work=Thesis
11:45 < nsh> the fundamental theorem of arithmetic for walks on finite graphs
11:45 < nsh> and the implications for the couching of quantum theory in matrix algebra
11:45 < nsh> and American Gods, by Neil Gaiman
11:45 < nsh> amongst many other things, all of which are better patter than EY has ever shat out his fool mouth
11:47 < nsh> https://www.amazon.co.uk/Quantum-Pictures-New-Understand-World/dp/1739214714
11:47 < nsh> .t
11:47 < EmmyNoether> No title found
11:47 < nsh> look harder
12:12 < muurkha> docl: the problem isn't not having powerful supercomputers, it's the proposed War on Powerful Supercomputers
12:14 < muurkha> similarly, it would be mostly fine if we didn't have cocaine, except for people who live in the Andes; but the War on Some Drugs has created a lot of problems including civil forfeiture, KYC, AML, and massive profits for the cartels that are able to deliver the drugs anyway
12:16 < muurkha> Perry: I don't know if you noticed the conversation last night with fenn, but some of Ziz's followers (a splinter group of LW/SIAI/MIRI etc.) did actually attack their landlord with a sword, so I think it's likely that someone is going to start murdering people because of Eliezer's teachings, given that they have already started trying
12:17 < muurkha> but then again, they're USAns, so they try to kill people at the drop of a hat ;)
12:18 < L29Ah> https://en.wikipedia.org/wiki/List_of_countries_by_intentional_homicide_rate US doesn't look too bad
12:19 < muurkha> yeah, the US is kind of in between rich countries and poor countries by that measure
12:19 < muurkha> but it's way out in front when it comes to unhinged people committing spectacular mass murders
12:20 < L29Ah> not your typical kindergarten with plastic utensils only
12:26 < muurkha> it's way out in front of countries with much higher weapons availability too
13:05 < hprmbridge> docl> Ziz is like, some sociopath who inserted themselves into the group at some point I guess? I have heard nothing good about them so far
13:05 < hprmbridge> yashgaroth> aww that yud slide was 'shopped https://www.flickr.com/photos/mikek/146209889/
13:06 < muurkha> Well, I think she's pretty crazy, but I don't think sociopathy is her problem
14:29 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
14:36 -!- Llamamoe [~Llamamoe@46.204.76.182.nat.umts.dynamic.t-mobile.pl] has quit [Quit: Leaving.]
17:09 -!- codaraxis [~codaraxis@user/codaraxis] has joined #hplusroadmap
17:33 -!- codaraxis [~codaraxis@user/codaraxis] has quit [Quit: Leaving]
18:23 < docl> hmm. I'm not quite seeing the drug war analogy muurkha. I mean, prohibition tends to do badly (creating black markets and so on) since people hate having their drugs taken away. but this would be more analogous to regulatory blocking of additional growth/scaling, which seems like a thing we've seen happen successfully by accident plenty of times
18:24 < docl> and in context, we're talking about a world where it is seen widely as a harm, like say asbestos production. nor are we saying no more production of other related things (can still work on refining AI on existing hardware, etc)
18:26 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:841a:1eab:15da:f17c] has quit [Quit: Leaving]
19:06 < muurkha> we're talking about registering and tightly controlling computational power, docl, with a worldwide computational power authority that calls in airstrikes on computers suspected of being too powerful
19:16 < docl> very large computers big enough to plausibly produce ASI by brute force
19:22 < fenn> it's not remotely analogous to asbestos control for the simple reason that you have to prevent 100% of the controlled activity (if you believe in the premise that AI will instantly go evil and also be superpowered)
19:22 < fenn> asbestos harm scales linearly with asbestos production
19:22 < fenn> something like that anyway
19:24 < fenn> nuclear weapons, bioweapons, and if there are other things in the same category, i'm not sure i'd want to know about them
19:24 < docl> preventing 100% of AI is neither feasible nor necessary in the given scenario, it's more about not scaling up compute quickly any more and not allowing anyone to start an arms race that causes this to happen
19:25 < fenn> why is it okay to have 1% of AI research in that scenario?
19:27 < fenn> rationalist dogma says that the default is for AI to not care about human life
19:33 < hprmbridge> Perry> emphasis on dogma.
19:33 < hprmbridge> Perry> Extraordinary claims require extraordinary evidence.
19:34 < fenn> the best AI so far acts like a human trapped in a computer, because that's what it was trained on
19:34 < hprmbridge> Perry> Eliezer’s evidence is recycling my late night speculation from 30 years ago into a religion. It’s been disturbing to watch for a long time.
19:37 < muurkha> asbestos is, as far as I know, legal everywhere
19:38 < muurkha> there are no asbestos raids on suspected asbestos factories
19:38 < jrayhawk> https://en.wikipedia.org/wiki/Asbestos#Complete_bans_on_asbestos
19:40 < muurkha> not in any of those 66 countries. your house will not be blown up by an air strike because it contains asbestos. there are no underground asbestos trafficking networks, as far as I know
19:44 < docl> are there perhaps examples of things that have been successfully banned?
19:45 < muurkha> well, asbestos has been pretty successfully banned
19:46 < muurkha> I mean you can get the stuff but there isn't much of a market
19:46 < muurkha> CFCs too
19:46 < muurkha> lead paint, leaded gasoline
19:48 < fenn> are there examples of things that have been successfully banned worldwide, 100%
19:49 < fenn> (let's just ignore the screaming economic opportunity if you do cheat and develop AI)
19:49 < fenn> i feel like this is going in circles
19:49 < docl> fenn: we're not talking about the idea that all AI is harmful, current AI doesn't seem to be. the idea is that ASI above a certain level would be. and the prerequisite for that would seem to be throwing enough more compute at it. day to day AI operations like chatGPT aren't especially worrying.
19:50 < jrayhawk> smallpox is 100% contained
19:56 < muurkha> fenn: not that I know of, no
19:56 < muurkha> smallpox does still exist, though, and its genome is open source
19:57 < jrayhawk> its development has been, broadly speaking, frozen for quite some time.
19:57 < muurkha> yeah
19:58 < jrayhawk> that said, sequences are published and gene synthesis keeps getting cheaper
19:58 < muurkha> probably there's a conspiracy theory that metastable hafnium weapons have been successfully banned
20:18 < jrayhawk> https://www.theguardian.com/world/1999/oct/17/balkans https://www.bbc.com/news/world-middle-east-56734657 https://www.nytimes.com/2017/04/18/world/asia/north-korea-missile-program-sabotage.html
20:39 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
20:41 -!- stipa [~stipa@user/stipa] has quit [Read error: Connection reset by peer]
20:41 -!- stipa_ is now known as stipa
21:26 < hprmbridge> cpopell> we don't scale with hardware/don't have the ability to copy ourselves, in theory
21:27 < hprmbridge> cpopell> (I am not a yuddite)
21:30 < muurkha> amusing comments by Aaronson: https://scottaaronson.blog/?p=7188
21:38 < fenn> "why six months?" "we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later. So now we're writing papers about GPT-4. Will we again have to throw our work away when OpenAI releases GPT-5?"
21:45 < fenn> sounds like they need to use GPT-4 to help write their papers
21:55 -!- flooded [~flooded@146.70.202.99] has joined #hplusroadmap
21:59 -!- test_ [~flooded@146.70.183.19] has quit [Ping timeout: 248 seconds]
22:08 < fenn> it's ironic that all of this future shock was caused by OpenAI delaying the release of GPT-4 to give them enough time to do an extra thorough safety evaluation
22:09 < fenn> moore's law was more of a target to hit than anything
22:09 < fenn> if language model improvements were released on some predictable schedule, that'd fix a lot of the issues that led to people panicking
22:10 < hprmbridge> cpopell> I disagree, but counterfactuals are hard to reason about
22:14 < fenn> the main problem is that breakthroughs don't happen predictably on schedule
22:15 < hprmbridge> cpopell> I predict that if breakthroughs happen predictably on schedule people will continue to be stressed
22:18 < fenn> yes, but less so. lab animal care technicians are still stressed when their beloved lab rat dies, but less so when they are given a date of impending doom
22:19 < hprmbridge> cpopell> Okay I predict doom will not ratchet down with predictably scheduled breakthroughs.
22:19 < fenn> is there a psychologist in the house
22:21 < fenn> "now NVidia is realizing that selling GPUs anywhere to anyone is pretty stupid, it'd be much better if they only made them accessible through their own proprietary clouds and APIs" :\
22:36 < superkuh> That's a good way to get people to develop opencl more.
22:36 < superkuh> Or something else AMD'ish.
23:03 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
23:34 -!- stipa [~stipa@user/stipa] has quit [Read error: Connection reset by peer]
23:38 -!- stipa [~stipa@user/stipa] has joined #hplusroadmap
--- Log closed Sun Apr 02 00:00:20 2023