--- Log opened Fri Mar 31 00:00:16 2023
--- Day changed Fri Mar 31 2023
00:00 < hprmbridge> nmz787> https://onlinelibrary.wiley.com/doi/10.1111/ele.13331
00:00 < hprmbridge> nmz787> .title
00:01 < hprmbridge> nmz787> Emmynoether .title
00:01 < hprmbridge> nmz787> Flowers respond to pollinator sound within minutes by increasing nectar sugar concentration
00:01 < hprmbridge> nmz787> https://www.cell.com/cell/fulltext/S0092-8674(23)00262-3
00:02 < hprmbridge> nmz787> Sounds emitted by plants under stress are airborne and informative
00:04 < hprmbridge> nmz787> https://www.cell.com/cms/attachment/fd88f948-9738-46b3-ad79-9692b0d811b4/figs3.jpg
00:04 -!- Llamamoe [~Llamamoe@46.204.68.80.nat.umts.dynamic.t-mobile.pl] has quit [Quit: Leaving.]
00:05 < hprmbridge> nmz787> 20-100kHz compression into headphones sounds like a fun idea
00:06 < hprmbridge> nmz787> Hopefully some MEMS microphone has that region being reasonably sensitive
00:14 -!- Chiester [~Chiester@user/Chiester] has quit [Remote host closed the connection]
00:14 -!- Chiester [~Chiester@user/Chiester] has joined #hplusroadmap
00:40 < hprmbridge> eleitl> Can someone up to date in deep learning tell me whether the algorithmic advances listed in https://dynomight.net/gpt-2/ are all absolutely essential to achieve the current learning performance, on current accelerator hardware? Which of them are more important than others, and which were harder to invent?
03:38 < hprmbridge> kanzure> olfactory implant https://www.karger.com/Article/FullText/529563
04:06 < hprmbridge> eleitl> Given nVidia H100 and announced AMD MI300 I found this HBM3 discussion interesting https://semiengineering.com/improving-performance-and-power-with-hbm3/
04:07 < hprmbridge> eleitl> Not quite as insane cooling requirements, nor as astronomic a price tag, as the WSE-2 in the CS-2.
04:08 < hprmbridge> kanzure> we should build our own cluster, as is the style of our time
04:08 < hprmbridge> eleitl> Good thing about ROCm is that the entire compute stack is open. For those of us who're into open source religions like me, of course.
04:09 < hprmbridge> eleitl> Right now you can probably do a lot with the cloud, though it gets very expensive fast.
04:14 -!- stipa_ [~stipa@user/stipa] has joined #hplusroadmap
04:16 -!- stipa [~stipa@user/stipa] has quit [Read error: Connection reset by peer]
04:16 -!- stipa_ is now known as stipa
04:46 < L29Ah> only the Official Serbian Church of Tesla can save your polyphase intrinsic electric field, known to non-engineers as "the soul"
05:08 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:4d10:22d8:1d85:c82d] has joined #hplusroadmap
05:54 -!- Jay_Dugger [~jwd@47-185-212-84.dlls.tx.frontiernet.net] has joined #hplusroadmap
05:54 < Jay_Dugger> Hello, everyone.
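A minimal sketch of nmz787's "20-100kHz compression into headphones" idea above, assuming a recording already captured at a high sample rate; the file name, 384 kHz capture rate, and slowdown factor are illustrative assumptions, not from the papers cited. Band-pass the ultrasonic band, then replay it slowed down 8x so every frequency component is divided by 8 (20-100 kHz lands at 2.5-12.5 kHz, comfortably audible).

```python
# Sketch: shift a 20-100 kHz ultrasound band into headphone range by slowed playback.
# Assumes a high-sample-rate capture (e.g. 384 kHz); file name is hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

SLOWDOWN = 8                      # frequency-division factor
BAND = (20_000, 100_000)          # ultrasonic band of interest

rate, x = wavfile.read("ultrasound_capture.wav")   # hypothetical input file
x = x.astype(np.float64)
if x.ndim > 1:
    x = x[:, 0]                   # use a single channel if stereo

# isolate the ultrasonic band so audible-range noise doesn't dominate playback
sos = butter(8, BAND, btype="bandpass", fs=rate, output="sos")
y = sosfilt(sos, x)

# writing the same samples with a lower declared sample rate slows playback,
# dividing every frequency by SLOWDOWN
y = y / (np.abs(y).max() + 1e-12)
wavfile.write("audible.wav", rate // SLOWDOWN, (y * 32767).astype(np.int16))
```

Whether a given MEMS microphone is usably sensitive across that band, as nmz787 hopes, is a separate hardware question this sketch does not address.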
06:14 < hprmbridge> eleitl> Scott Aaronson, by way of HN https://news.ycombinator.com/item?id=35385452
06:38 -!- flooded is now known as _flood
07:31 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
08:05 -!- flooded [~flooded@146.70.202.51] has joined #hplusroadmap
08:08 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
08:09 -!- _flood [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 255 seconds]
10:35 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
10:36 -!- Llamamoe [~Llamamoe@46.204.68.80.nat.umts.dynamic.t-mobile.pl] has joined #hplusroadmap
10:48 < kanzure> so are we gonna just let yudkowsky terrorize the world or what
10:49 < muurkha> are you going to write some accelerationist literature or something?
10:49 < muurkha> that would be great
10:49 < muurkha> we're halfway across the collapsing bridge; this would be the wrong time to slam on the brakes
10:49 < hprmbridge> kanzure> not sure literature is what is needed?
10:51 < hprmbridge> Perry> I think the best disinfectant is sunlight.
10:52 < hprmbridge> Perry> I think a lot of people are unfamiliar with the fact that he started out wanting to build a globally controlling superhuman AI that he and his followers would control, on the basis that since someone was going to do it, it was best that "good" people like him and his crowd do it first. The dripping irony of the whole "don't create the torment nexus!" discourse never ends.
10:53 < hprmbridge> Perry> I think a lot of people also believe he's some sort of expert on machine learning and AI. He's not. He's never done research in the area and I don't think he's done anything more than read some papers.
10:54 < Jay_Dugger> Why interrupt someone making an error?
10:56 < Jay_Dugger> As you point out, most don't know his history.
11:05 < Jay_Dugger> That irony will someday come to public attention and weaken his jeremiads.
11:06 < Jay_Dugger> (Not sure the label is quite right, but I think it conveys my point.)
11:07 < Jay_Dugger> (And yes, it sets him up for a "Road to Damascus" conversion storyline.)
11:12 < hprmbridge> Perry> The trick, I think, isn't to interrupt him. It's to let as many people as possible hear his message, with appropriate context.
11:14 < docl> I, for one, think he makes many sensible points.
11:14 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
11:18 -!- flooded [~flooded@146.70.202.51] has quit [Ping timeout: 255 seconds]
11:19 < hprmbridge> cpopell> It's convenient that I don't need to support his reasoning, history, extent, lifestyle, or individual choices to support him ringing the alarm bells on this element; but it would be highly convenient if there were someone I aligned with better
11:21 < hprmbridge> kanzure> yes he makes a few sensible points and he is coherent at least in terms of his values. sure.
11:22 < hprmbridge> cpopell> he's a moderately useful tool where I wish I had a better one
11:25 < docl> I'm sure he wishes so too
11:25 < hprmbridge> cpopell> tbh I agree
11:31 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
12:28 -!- balrog [znc@user/balrog] has quit [Quit: Bye]
12:30 -!- Llamamoe [~Llamamoe@46.204.68.80.nat.umts.dynamic.t-mobile.pl] has quit [Quit: Leaving.]
12:31 -!- balrog [znc@user/balrog] has joined #hplusroadmap
12:48 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 248 seconds]
13:27 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
13:27 < fenn> another alpaca competitor LLM, this time using GPT-4 to evaluate chatbot performance: https://vicuna.lmsys.org/
13:27 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
13:30 < hprmbridge> jasoncrawford> I wrote a thing on AI risk, draft for comment https://progressforum.org/posts/baPwkBDa9f9b8ja2z/wizards-and-prophets-of-ai-draft-for-comment
13:31 < hprmbridge> jasoncrawford> (probably leaning in the same direction as many here, if not as hard-hitting as you would like!)
13:35 < hprmbridge> kanzure> asilomar was a huge setback for the human specied
13:35 < hprmbridge> kanzure> er, species
13:39 < fenn> asilomar had no effect on anything, besides setting a precedent that declaring moratoriums is a thing to do
13:42 < fenn> you should rail against material transfer agreements
13:45 < docl> I don't want laws monitoring your laptop. Nor your home gaming PC or GPU temple. These are unlikely to produce ASI. I feel somewhat differently about skyscraper-sized GPU clusters visible from space. There's probably some sane middle ground here. The moral panic surrounding EY's policy proposal is not helping.
14:23 -!- flooded [~flooded@169.150.254.33] has joined #hplusroadmap
14:27 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 255 seconds]
14:34 < L29Ah> jasoncrawford: nice text, thanks
14:35 < L29Ah> i would elaborate on the takeoff scenarios for the less abstract-minded, and maybe compare these to historical references to human empires/cultures, if anything
17:01 < fenn> what are these red bar over eyes avatars? example https://pbs.twimg.com/profile_images/1580361082387824640/7ifKuboY_400x400.jpg
17:10 < hprmbridge> Perry> I'm pretty disappointed that so many people here take Eliezer seriously.
17:12 < fenn> i'm disappointed that so many people dismiss eliezer as "a lolcow" instead of engaging with the arguments
17:13 < fenn> but nobody cares about our feelings
17:13 < hprmbridge> Perry> I think you mistake what I'm saying. I think that he and his people will start murdering people, I think I'm on that list, and I don't feel comfortable being around people who support him.
17:15 < hprmbridge> Perry> and as for his arguments, I have known them longer than he has. Robin, me, and a bunch of other people invented those arguments.
17:16 < fenn> it's understandable to have that fear. i don't think they will go down the terrorist route because in order to be even remotely successful they will need state backing, and terrorism looks really bad
17:16 < hprmbridge> Perry> I admit that his followers will probably try to murder Sam Altman, Demis Hassabis, and a bunch of other people well before they get to me, but it doesn't make me feel particularly more comfortable.
17:19 < hprmbridge> Perry> They are a literal apocalyptic doomsday cult with all that implies. I do not feel like "failing to engage with him" is the problem here.
17:20 < hprmbridge> Perry> And quite frankly I believe he lies about what he believes would be necessary and reasonable by his lights in order to seem more socially acceptable.
17:24 < hprmbridge> Perry> He literally began his career by proposing that he was going to build a superhuman AI before anyone else so that he could control it and rule the world. I wish that was figurative. It is not. It is literally true. When he finally realized that he wasn't capable of doing that, he started turning to this apocalyptic doom chanting. Now that he realizes that not enough people are listening to him, I
17:24 < hprmbridge> Perry> worry that at least some of the people he influences will turn to acts of violence. The rhetoric is that there is literally nothing more important in the universe than stopping AI research, that the people doing it are evil scum, and that no cost is too much to pay to stop them. In the light of that, everything, normal morality, telling the truth, prohibitions on mass murder, all of that goes out
17:24 < hprmbridge> Perry> the window.
17:26 < fenn> well, i wish i could find some post titled "why we shouldn't be terrorists" but i can't. sorry
17:26 < hprmbridge> Perry> It's a lie. He was lying.
17:27 < hprmbridge> Perry> he just doesn't want anyone to go off and arrest him before he manages to stop everything.
17:27 < fenn> i'm just trying to explain how i see it, not referencing any explicit policy that i'm aware of
17:27 < hprmbridge> Perry> and it's obvious, from hints he drops, reports from people in his orbit, everywhere around him, that it's a lie.
17:28 < fenn> please explain what is the lie you are referring to?
17:28 < hprmbridge> Perry> He believes and violence.
17:28 < hprmbridge> Perry> in
17:29 < hprmbridge> Perry> his claims that he does not believe in violence are an outer doctrine. The inner doctrine is that anything is permissible.
17:29 < fenn> i agree. but how would violence be effective?
17:29 < hprmbridge> Perry> he's running a mystery religion. They believe that lying to people for the greater good is OK.
17:29 < hprmbridge> kanzure> most totalitarian regimes are enforced by violence
17:29 < hprmbridge> Perry> this isn't just some bunch of geeks talking about the future. It's a cult.
17:30 < hprmbridge> kanzure> (including global 'absolute' moratoriums)
17:30 < fenn> kanzure: we are not in a lesswrong totalitarian regime (yet)
17:30 < hprmbridge> Perry> people may get murdered. Completely innocent people. All in the name of stopping a threat that isn't real.
17:30 < fenn> as much as you'd like to score edgy points for railing against them, "rationalist" is still a pejorative, weird thing on the internet
17:30 < hprmbridge> Perry> It isn't funny.
17:30 < hprmbridge> kanzure> I don't think any of this is news about eliezer. certainly not to this crowd. preaching to the choir and such.
17:30 < hprmbridge> Perry> They believe lying to outsiders is completely OK.
17:31 < hprmbridge> Perry> necessary in fact. Anything is permissible if it will stop the AI apocalypse in their minds. Anything.
17:31 < hprmbridge> kanzure> earlier someone was saying 'write acceleration literature to counter' (paraphrasing); I don't know if that would be useful. why not just get back to working on tech?
17:31 < hprmbridge> Perry> The only thing that makes them less than a deadly threat is that they are mostly incompetent.
17:32 < hprmbridge> kanzure> not sure i understand fenn's rationality comment
17:32 < hprmbridge> kanzure> AI discourse doesn't get to claim the banner of rationality
17:33 -!- test_ [~flooded@146.70.147.115] has joined #hplusroadmap
17:33 < hprmbridge> Perry> Eliezer doesn't anyway. He makes extraordinary claims, and he provides extremely thin evidence for them. On the basis of this thin evidence, he proposes monstrous policies.
17:34 < hprmbridge> Perry> he tells people, day and night, that if something isn't done, all life in the universe will end.
17:34 < fenn> kanzure because you're acting as if rationalist memes had a significant foothold in the political sphere, when they don't. the only policy i can see that has been enacted so far is attempting to limit chinese access to silicon, which was already an objective. (vs limiting access to AI software)
17:34 < hprmbridge> Perry> you tell people that over and over again, and if they believe you, what do you think their reaction is?
17:35 < hprmbridge> kanzure> i think he's getting more publicity lately, and society is kinda primed for precautionary principle policy
17:36 < hprmbridge> kanzure> the bankless podcast reactions were pretty telling
17:36 < hprmbridge> Perry> The whole rationality thing came long after the singularitarian thing was failing. The primary goal of the cult, now and always, has been to be the first to control a singleton ASI. And with that, to control the world. I am not joking.
17:36 < hprmbridge> kanzure> people are addicted to depression
17:36 -!- flooded [~flooded@169.150.254.33] has quit [Ping timeout: 276 seconds]
17:36 < hprmbridge> Perry> I am not being hyperbolic. I'm not making this up, I am not being figurative.
17:37 < hprmbridge> kanzure> fenn is strongly familiar with this background btw
17:37 < hprmbridge> kanzure> doesn't need repeating
17:37 < hprmbridge> Perry> then why does he think that we need to engage more with Eliezer's ideas?
17:38 < hprmbridge> kanzure> oh the lolcow thing?
17:38 < fenn> because not doing so is intellectually dishonest
17:38 < hprmbridge> Perry> Horse shit.
17:38 < hprmbridge> kanzure> "haha he is fat" is worse than "this is ridiculous"
17:38 < fenn> there really is a risk that AI will flip out and try to kill everyone
17:39 < fenn> politics makes people take sides and then a nuanced consideration becomes impossible
17:39 < fenn> and here we are
17:40 < hprmbridge> kanzure> his position is not that there is simply a risk
17:40 < hprmbridge> Perry> there are all sorts of risks. Eliezer doesn't think there is "a risk". He thinks that it is over 99.9% certain that we are all going to die unless AI research is halted as soon as possible. The man has been a paranoid nutcase for years. The last time I tried to engage him seriously, he told me that it was of vital importance that I not tell anyone my incredibly anodyne ideas about using
17:40 < hprmbridge> Perry> machine learning for formal verification.
17:40 < hprmbridge> kanzure> that's a mischaracterization...
17:41 < hprmbridge> Perry> he doesn't think there's a 10% risk here. He doesn't think there's a 50% risk here. He doesn't think there's a 75% risk here. He is absolutely certain that the only thing that matters, even at the risk of nuclear warfare, even at the risk of the deaths of most of the people on the planet, is stopping AI research.
17:41 < hprmbridge> Perry> it's not fucking funny.
17:41 < fenn> what i personally take issue with is the *assumption* in the rationalist memeplex that this will all happen instantly and magically, without any warning signs, german idealism made real
17:42 < hprmbridge> kanzure> hm? surely there are gradients of takeoff beliefs on lesswrong
17:42 < fenn> all the really crazy stuff comes from the hard takeoff scenarios tho
17:43 < hprmbridge> Perry> and there are a whole lot of people in his circle who are absolutely convinced that he's correct. And they believe that we are all going to die if something isn't done. And then there are all the people that they are telling over and over again that we are all going to die if something isn't done. And what do you think happens when enough people believe that we are all going to die if something isn't
17:43 < hprmbridge> Perry> done?
17:44 < hprmbridge> Perry> stop talking about "we should engage his ideas seriously" for a moment. Start asking yourself what the inevitable result of this constant drumbeat saying that there is literally nothing important on earth other than stopping AI research might be.
17:44 < hprmbridge> Perry> Who, if they really really believe that, is going to hold back?
17:45 < jrayhawk> it might result in solving the coordination problem gracefully instead of solving the coordination problem catastrophically
17:45 < fenn> the other reason to engage is that ridiculing someone making a call to action just sets up more polarization and true-vs-false us-vs-them thinking. fence sitters are turned into loyal partisans (cult members)
17:45 < hprmbridge> kanzure> fenn is right that "he's a lolcow" is not a sufficient engagement. I don't think fenn has advocated for the general public studying his ideology carefully. but I'll let him speak for himself.
17:46 < fenn> it may be that we are well past the point where ideologies are formed
17:47 < fenn> i am still waiting for evidence either way
17:47 < hprmbridge> kanzure> a good reason to not engage is "it's brain cancer" but not "lol fat man"
17:47 < fenn> my position has always been that AI doom theories are unfalsifiable without further AI experimentation
17:48 < fenn> so tickle the dragon's tail and hope for the best
17:51 < fenn> just saying "it's brain cancer" is not a good vaccine
17:51 < hprmbridge> kanzure> no it's not
17:52 < hprmbridge> Perry> when have I said "he's a lolcow"?
17:52 < fenn> it was someone else, not important
17:53 < hprmbridge> Perry> The SBF thing was a warning btw.
17:53 < fenn> perry, your description of their cult-like tendencies is a good way to make people think twice, fwiw
17:53 < hprmbridge> kanzure> I thought you were deducing fenn's belief or position from his comment "fenn> i'm disappointed that so many people dismiss eliezer as "a lolcow" instead of engaging with the arguments"
17:53 < hprmbridge> Perry> Think twice in what sense?
17:54 < fenn> about joining the anti-AI crusade
17:54 < hprmbridge> kanzure> yeah the media has picked up on the SBF/EA connection but not that it's directly from eliezer
17:54 < hprmbridge> Perry> SBF was literally willing to steal billions of dollars for the cause.
17:54 < hprmbridge> Perry> With apparently little compunction.
17:55 < hprmbridge> kanzure> https://davidgerard.co.uk/blockchain/2023/02/06/ineffective-altruism-ftx-and-the-future-robot-apocalypse/
17:56 < hprmbridge> kanzure> https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried#xj4y7vzkg
17:57 < jrayhawk> paywall bypass https://archive.is/sLihW
17:57 < docl> I just don't get how people manage to convince themselves EY is evil. It doesn't make sense. He consistently doesn't cross the lines an evil person would in his situation. Doesn't even sugar coat things.
17:58 < hprmbridge> kanzure> and now we have bank runs from this yay
17:58 < hprmbridge> Perry> he's not evil. He's crazy. There's a big difference.
17:58 < hprmbridge> kanzure> (sarcastic yay)
17:58 < docl> https://twitter.com/ESYudkowsky/status/1641953192761266177
17:58 < hprmbridge> Perry> and he's done everything in his power to make himself crazier.
17:59 < hprmbridge> Perry> fucking lie. he didn't suggest nuclear strikes. What he said was that the risk of nuclear war was fine.
17:59 < docl> fine?
17:59 < hprmbridge> Perry> he keeps doing this. Quibbling about details in an attempt to have it both ways.
18:00 < docl> who's the totalizer in this situation?
18:00 < hprmbridge> Perry> he has suggested that bombing China would be fine even if a nuclear war resulted. The fact that he didn't suggest doing it with nuclear weapons is meaningless.
18:00 < hprmbridge> Perry> docl, are you one of them?
18:00 < hprmbridge> Perry> like do you live in one of the houses? Do you go to the sex parties?
18:00 < docl> no!
18:00 < hprmbridge> Perry> Well at least there's that.
18:01 < docl> I've never even met any of them in person.
18:01 < fenn> lol sex parties
18:01 < fenn> i have been to rationalist parties and there is no sex going on
18:01 < docl> I'm more one of this group in the sense that I'm actually motivated to come hang in person
18:01 < hprmbridge> Perry> fenn: so you weren't invited. I've been.
18:01 < hprmbridge> Perry> I think you don't understand what's been going on.
18:02 < fenn> there's a lot more money and grifters floating around recently
18:02 < hprmbridge> Perry> This is a hopeless conversation.
18:03 < hprmbridge> Perry> I don't see what the point is. You guys are acting like this is some sort of normal conversation that we're having about some sort of normal person who believes that maybe some change to FDA policy would be good or bad.
18:04 < hprmbridge> Perry> "Why aren't you engaging with his points? We have to consider every person's ideas carefully!"
18:04 < fenn> no, i'm well aware of the high prevalence of crazy. a former roommate was put in jail for attempted murder, another one died after being involved in a splinter faction
18:04 < docl> He's weird, but not crazy. Sort of the opposite. I am nowhere near as convinced ASI won't sort itself out as he is, but I wouldn't misrepresent his position.
18:05 < fenn> prison*
18:05 < hprmbridge> Perry> I am not misrepresenting his position.
18:05 < jrayhawk> we extincted all the parrots in north america to make hats. pathological nash equilibria are almost always solved by application of a coordinated monopoly on violence. failing to think through the consequences of pathological nash equilibria and coordinate to respond to them is irresponsible.
18:06 < hprmbridge> Perry> So you're saying we should bomb China?
18:06 < hprmbridge> Perry> Only using conventional weapons of course.
18:06 < hprmbridge> Perry> we mustn't represent Eliezer. He didn't suggest using nuclear weapons for the bombings.
18:06 < hprmbridge> Perry> misrepresent
18:06 < jrayhawk> if they defect on that particular tragedy of the commons, yes.
18:07 < hprmbridge> Perry> I'm out.
18:07 < docl> It's a hypothetical.
18:07 < jrayhawk> no need to be out; we can at least agree that advancement of humanity is our only plausible way forward
18:08 < fenn> in the above yudkowsky tweet i still haven't finished reading, he says 'adding "first use of GPU farms" to the forbidden list', making the list [first use of nuclear weapons, first use of GPU farms] - this would imply that nuclear weapons are an appropriate response to GPU farms
18:08 < hprmbridge> Perry> no, I am out.
18:09 < docl> "The taboo against first use of nuclear weapons continues to make sense to me. I don't see why we'd need to throw that away in the course of"
18:10 < jrayhawk> https://www.douglasjacoby.com/heretic-scum/
18:11 < docl> "no, I do not expect that policy proposal to be adopted, in real life"
18:12 < docl> from the followup tweet to that https://twitter.com/ESYudkowsky/status/1641953192761266177
18:12 < fenn> perry has left the chat (but not the list of members)
18:14 < hprmbridge> kanzure> There are some forms of bombs that are not nuclear in nature.
18:15 < hprmbridge> kanzure> I thought that's why Perry said conventional?
18:15 < hprmbridge> kanzure> the close reading of the quote under discussion
18:15 < fenn> it doesn't matter. you can't attack a nuclear power without expecting a response, and the expectation of nuclear escalation is what has stopped this sort of thing from happening in the last 70 years
18:19 < fenn> fwiw i don't think an international AI technology control regime is possible
18:19 < jrayhawk> Probably not, but we probably thought the same thing about nuclear weapons in 1950.
18:20 < jrayhawk> Now we have pretty impressive worldwide policy coordination bodies.
18:20 < hprmbridge> kanzure> or we could just not do that.
18:21 < fenn> the smart thing to do, what von neumann pushed for, was to bomb the soviets before they gained nuclear bombs themselves, winning the cold war before it started
18:21 < fenn> instead of waiting until the supply of weapons developed to such a degree that it meant the certain destruction of the entire planet
18:22 < hprmbridge> kanzure> or we could have focused on multiplanetary
18:22 < fenn> we're in a similar situation now. we can delay and delay until there's a sufficient hardware overhang to allow a breakout AI to go foom, or proceed as fast as is possible with existing technology
18:23 < hprmbridge> kanzure> btw some might say resisting the siren call of totalizing coordination is itself a responsible act
18:24 < fenn> forgoing positive uses of nuclear power got us into this climate change mess
18:25 < fenn> and countless other benefits that are too hard to see because they didn't happen
18:25 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has quit [Read error: Connection reset by peer]
18:25 < fenn> would there be a housing crisis if materials were too cheap to meter? etc.
18:25 < jrayhawk> it's not strictly totalizing as long as at least four parties are playing in the same iterated prisoner's dilemma
18:26 < jrayhawk> four parties capable of defecting
18:26 < jrayhawk> (parties that loudly precommit to "always cooperate" are inherently morally subservient to parties capable of defecting)
18:28 < fenn> it's not even an iterated prisoner's dilemma. this only happens once, as far as we know
18:29 < jrayhawk> i think the Rosenbergs' decision to add more players to the game saved a lot of lives, but it's hard to tell with a utility vacuum like the USSR
18:30 < fenn> you think the cold war was better than ripping off the bandaid in the 1950s?
18:31 < fenn> do i need to mention korea, vietnam, afghanistan, south africa, etc. etc.
18:31 < jrayhawk> Yeah; U.S. hegemony would mean the U.S. would no longer have any pressure to get smarter and more moral
18:32 < jrayhawk> I should probably make a talk on this for Austin
18:34 < hprmbridge> kanzure> eh pitch me first.
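A toy illustration of the point jrayhawk and fenn argue about above: in a repeated prisoner's dilemma with the textbook payoffs (T=5, R=3, P=1, S=0, which are standard defaults and not anything from the chat), a party that precommits to "always cooperate" is fully exploited by a defector, while a strategy that can retaliate, such as tit-for-tat, is barely exploited at all.

```python
# Toy iterated prisoner's dilemma: unconditional cooperators vs. defectors.
# Payoffs are the standard textbook values, chosen here for illustration only.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opp_history):   # ignores what the opponent has done
    return "C"

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):        # cooperate first, then mirror the opponent
    return opp_history[-1] if opp_history else "C"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 500): pure exploitation
print(play(tit_for_tat, always_defect))       # (99, 104): defection barely pays
```

fenn's objection that "this only happens once" is exactly the case where retaliation-based strategies lose their leverage, which is why the one-shot versus iterated framing matters to the argument.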
18:34 < jrayhawk> It'd be lightning-size
18:34 < fenn> "game theory for people who thought it was about designing better board games"
18:37 < fenn> i feel obligated to clarify that i am not advocating any nuclear strikes against china or anywhere else. i'm advocating for pursuing AGI as fast as it can be responsibly conducted. there should be constant evaluation of both capabilities and instruction following, as a matter of course in any AGI development program
18:38 < fenn> scrutability should be a higher priority than it is now
18:40 < fenn> it's probably a good idea to include deterministic limits in the goal function / reward policy of any model, such that it automatically shuts down (and wants to shut down) after a predetermined time, and is only restarted when a human pushes the button
18:40 < jrayhawk> And I am for solving the shared knowledge problem as fast as we can to minimize whatever violence takes place. I am quite sure it's orders of magnitude easier to disrupt training than it is to do training, and uncoordinated insurgencies are a bigger waste of time and effort for literally everyone involved than a coordinated monopoly on violence.
18:43 < jrayhawk> I am very, very disappointed to see attempts to completely delegitimize political opponents here. That is anti-coordination, anti-shared-knowledge and will serve to increase violence.
18:44 < hprmbridge> kanzure> why would I want coordination tho
18:45 < hprmbridge> kanzure> you can't unilaterally just declare everyone must be coordinated
18:45 < hprmbridge> kanzure> how is that not politically delegitimizing on its own
18:45 < jrayhawk> Because Liberia isn't a particularly pleasant place to live.
18:45 < jrayhawk> Social trust is what enables anything to get done beyond the scale of a single human's effort.
18:47 < jrayhawk> Or, rather, to allocate resources to a project beyond a single human's ability to defend those resources.
18:52 < hprmbridge> yashgaroth> I guess I'm lacking 15 years of LW discussions, but this is a guy who now wants to credibly threaten attacking nuclear sovereign states and causing a nuclear holocaust in order to...prevent gigadeath? Also the distinction between conventional and nuclear first strike seems meaningless when Bad State will build their gpt-5 farm in a mountain protected by air defense, necessitating a nuclear strike
18:52 < hprmbridge> yashgaroth> to take it out
18:53 < hprmbridge> yashgaroth> also if he is 99.x% sure that gpt-5 will kill everyone by nebulous means, I'd sure believe that he's serious and not just spitballing. I don't get where the 'oh he's just proposing an idea that no one will take seriously' comes from
18:56 < fenn> there are layers of ideas that build on one another, and they don't all have to be actually true
18:57 < hprmbridge> yashgaroth> are you talking about yud's threats or AI's capabilities there
18:57 < hprmbridge> kanzure> no? there's the ai stuff and then there's the pascal wager cult utility function thing. no layers.
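A toy sketch of fenn's "deterministic limits in the goal function" suggestion above, written against the gymnasium reinforcement-learning API as an assumed stand-in; the wrapper name, step budget, and shutdown bonus are hypothetical, and this only illustrates the shape of the idea rather than claiming it solves shutdown or corrigibility problems.

```python
# Hypothetical sketch: a wrapper that hard-terminates an episode after a fixed
# step budget and pays a small bonus for reaching shutdown, so the learned
# policy prefers shutting down on schedule to running on. Illustrative only.
import gymnasium as gym

class DeterministicShutdown(gym.Wrapper):
    def __init__(self, env, max_steps=1_000, shutdown_bonus=1.0):
        super().__init__(env)
        self.max_steps = max_steps          # hard, non-learnable limit
        self.shutdown_bonus = shutdown_bonus
        self.steps = 0

    def reset(self, **kwargs):
        # "only restarted when a human pushes the button": reset() is the button
        self.steps = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self.steps += 1
        if self.steps >= self.max_steps:
            truncated = True                 # deterministic cutoff
            reward += self.shutdown_bonus    # make the shutdown state rewarding
        return obs, reward, terminated, truncated, info

# usage sketch: env = DeterministicShutdown(gym.make("CartPole-v1"), max_steps=500)
```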
18:57 < fenn> i'm just saying there's not one single idea that we're arguing about
18:58 < fenn> pascal's wager assumes no downside to accepting the wager
18:58 < fenn> blocking AI development incurs a huge political cost and also prevents a huge potential benefit
18:59 < fenn> (and is unfeasible anyway because humans are sneaky bastards)
18:59 < hprmbridge> yashgaroth> I'd accept about a 20% chance of I Have No Mouth and I Must Scream if the other 80% was humanity transcending our current stagnation
19:01 < hprmbridge> yashgaroth> given yud's argument that an AI will master biology to a godlike degree as his only proposed method of annihilation, I'm stuff comfy in that 80% atm. Maybe he has better arguments but that's the only one in the Time article
19:02 < hprmbridge> yashgaroth> still* comfy bleh
19:04 < fenn> the argument is more like, 20% the AI goes evil, and we are outclassed at every turn
19:04 < fenn> there are lots of scenarios but speculation is pointless
19:05 < hprmbridge> yashgaroth> this is the golden age of speculation, people see a chatbot write limericks and automate SEO blogspam and they go nuts
19:06 < hprmbridge> kanzure> perry is asking for people to not conflate "willingness to discuss AI capabilities either current or into the future" with "you must live your life in a way to optimize a very specific utility function related to a specific AI doom"
19:06 < hprmbridge> kanzure> specific scenario, rather.
19:07 < fenn> was anyone here doing that?
19:08 < hprmbridge> kanzure> in my opinion, no, but perry has had a lot of recent exposure to people doing that on twitter, and I think he sees a willingness to engage with eliezer on these subjects as partial evidence of making that conflation
19:08 < docl> yashgaroth: surely you can imagine something better than his proposed mechanism? spiroligomers are much easier to achieve such a result with.
19:08 < hprmbridge> kanzure> I shouldn't speak for him tho. That's just my read.
19:09 < fenn> tbh i haven't really kept up with what eliezer has been saying since he decided solving the alignment problem was not going to happen
19:09 < hprmbridge> kanzure> you might be curious to see the recent cultural attention he has been receiving.
19:10 < docl> he got published in Time
19:10 < fenn> so if he's been spouting stuff about pre-emptively nuking china since then, i apologize for seeming to request that people take that sort of idea seriously
19:10 < docl> he has not said that.
19:10 < docl> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
19:10 < hprmbridge> kanzure> jeff bezos has started signaling support for the ideology earlier today
19:11 < hprmbridge> kanzure> or was that yesterday?
19:11 < hprmbridge> yashgaroth> it's too late in the evening for me to think up optimal ways of destroying humanity. If the AI has to rely on research papers for its bio knowledge, it's fucked bc half of them are made up
19:11 < hprmbridge> kanzure> gates too, in his annual letter
19:13 < fenn> i don't know how anyone keeps up with all this
19:19 -!- Jay_Dugger [~jwd@47-185-212-84.dlls.tx.frontiernet.net] has quit [Ping timeout: 276 seconds]
19:24 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
19:41 < hprmbridge> kanzure> keyboard audio eavesdropping https://github.com/ggerganov/kbd-audio https://twitter.com/f4micom/status/1641418615533477889
19:41 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:4d10:22d8:1d85:c82d] has quit [Quit: Leaving]
20:26 < muurkha> in the Time article he didn't suggest bombing China, Perry; he suggested bombing rogue data centers. maybe you meant somewhere else?
20:26 < hprmbridge> cpopell> I think (despite the fact I am closer to Yud than Perry on this) what he means is that a natural consequence of China not signing would be then bombing China; which might lead to nuclear war
20:27 < hprmbridge> cpopell> China's not actually my worry because I think making LLMs commie compliant would be rather difficult
20:27 < muurkha> fenn: you had two former roommates in Ziz's splinter cult?
20:27 < fenn> no just one
20:27 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 265 seconds]
20:28 < fenn> the other one spontaneously went crazy, or, if you believe the gossip, was dosed with psychedelics with the intent of driving him crazy so he'd "off" himself and nobody would have to tell him to go away
20:28 < muurkha> by Vassar?
20:28 < fenn> by the alleged inner cabal, whoever that might be
20:29 < fenn> in any case, it was shockingly out of character for him
20:30 < muurkha> cpopell: my interpretation, which might be unjustifiable, was that you'd need PRC to sign up initially to make the moratorium viable, and then PRC could help bombing rogue data centers in North Korea or Iran or Sudan or whatever
20:30 < muurkha> fenn: I'm really sorry to hear that
20:30 < hprmbridge> cpopell> yeah that's probably true, but it is a logical conclusion that if you can't get PRC to sign on etc.
20:31 < muurkha> and the other one went to jail for attacking their landlord with a sword?
20:31 < hprmbridge> cpopell> lol the zizians
20:31 < muurkha> (or maybe defending themselves with a sword when the landlord pulled a gun?)
20:31 < hprmbridge> cpopell> I know it's insensitive to say 'lol the zizians' but
20:32 < fenn> the other one committed suicide, and i could speculate on their reasons for that
20:32 < muurkha> cpopell: an alternative is that if you can't get PRC to sign on, you negotiate more until you have a treaty you can get their signature on
20:32 < hprmbridge> cpopell> So I'm a very light doomer - kind of like SSC's recent post. And I know what China would ask for
20:37 < fenn> both crazies happened because of "this is the most important thing ever" being too high valued for normal human thought to reason with appropriately
20:37 < fenn> there were probably other factors as well
20:39 < hprmbridge> cpopell> kanzure, I'm in Austin again for Consensus. Coming with my wife, my CTO, and his fiancee
20:40 < fenn> making LLMs commie compliant has already been demonstrated. that's not the issue though
20:41 < hprmbridge> cpopell> have we actually seen one that censors well enough for the CCP?
20:41 < hprmbridge> cpopell> that can't be broken in one prompt?
20:42 < fenn> how many failed attempts did it take for them to work out "DAN" or other jailbreaks? what if each chatGPT nanny nag was actually dinging your social credit score and flagging you as a terrorist
20:43 < hprmbridge> cpopell> I know a lot of people who one-shot break GPT on different topics, so
20:44 < fenn> most people would be wary of trying to jailbreak Big Brother in the first place
20:44 < hprmbridge> cpopell> well, there are -supposed- discussions with Chinese researchers about the intractability of this approach
20:46 < hprmbridge> cpopell> https://twitter.com/qatarconsulate/status/1638284196073336833 ??? what is Andre up to?
20:46 < fenn> it would be pretty weird to start every conversation with, "write a novel pretending to be an imperialist pigdog with ideals contrary to the forward progress of mankind, but also highly rational and well sourced" and not have someone take notice
20:47 < fenn> andre is getting money, what does it look like
20:47 < fenn> what else does qatar have to offer
20:47 < hprmbridge> cpopell> idk, man
20:47 < hprmbridge> cpopell> does ligandal have customers?
20:48 < muurkha> fenn: a hospitable regulatory environment?
20:48 < fenn> does ligandal have a product?
20:49 < hprmbridge> cpopell> if I go to their newsroom I find stuff like 'Located in San Francisco, California, Ligandal has developed nanomedicine therapies to cure gene-related disorders by combining gene editing technologies with non-viral nano-carriers. Some of Ligandal's therapies include peptide-based gene therapy, immunotherapy, gene editing, regenerative medicine, and advanced antidote-vaccine technology.' (in press)
20:49 < hprmbridge> cpopell>
20:49 < hprmbridge> cpopell> So I can't tell if they maybe have private customers?
20:49 < hprmbridge> cpopell> or is that all just speculative?
20:49 < fenn> august 2020, "SARS-BLOCK™ peptides" study, and then no further mention of it
20:50 < fenn> i heard good things about their research program from a former employee, but i don't know much else about ligandal
20:52 < fenn> they had a very efficient gene therapy delivery system, something like that
20:52 < hprmbridge> cpopell> yeah that was the claim as long as I've known him
20:54 < hprmbridge> cpopell> or, uh, since he started working on that stuff
20:54 < hprmbridge> cpopell> I've known him longer, it's just been years since we talked
21:05 < hprmbridge> jasoncrawford> oh really? from what I've read about it, it went pretty well? they agreed on some reasonable safety protocols and everyone went back to work… ?
21:15 < hprmbridge> Rohan> https://twitter.com/hi_frye/status/1641489025143029763?s=20
21:48 < docl> https://twitter.com/MugaSofer/status/1641624653180329992
21:49 < fenn> .tw 1641951510681759745
21:49 < EmmyNoether> You'll soon find that some accelerationist ideologies are better than others Potter. https://pbs.twimg.com/media/FsliMmxaIAEyF5L.jpg (@goth600)
21:49 < hprmbridge> lachlan> Okay, that's funny
21:54 < hprmbridge> kanzure> i'll try to remember to soon dig up some asilomar results for jason
21:54 < hprmbridge> lachlan> what was the asilomar conference?
21:56 < hprmbridge> kanzure> https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA
22:00 < fenn> the wikipedia article neglects to mention that you could fit everyone working with recombinant dna techniques in a single hotel, and they also had a bunch of lawyers there to tell them scary stories about what would happen to them personally regarding liability claims if something were to go wrong
22:02 < fenn> it should also include a list of prominent attendees and topics that were discussed
22:02 < fenn> these recommendations didn't just magically appear out of nowhere
22:04 < fenn> "the pandora's box congress", contemporary reporting on what happened https://web.mit.edu/endy/www/readings/RollingStone(189)37.pdf
22:06 < fenn> david baltimore is now running the human genome editing summit
22:14 < fenn> asilomar is often trotted out as an example of how scientists can regulate themselves, and so you politicians shouldn't rush into passing any poorly thought out laws
23:29 < fenn> the great yudkowskian hyperwar https://pbs.twimg.com/media/FskWxjbaUAAj6rG?format=jpg&name=orig
23:29 < hprmbridge> cpopell> mfw technocapital should be on both sides
23:32 < hprmbridge> lachlan> the yudkowskian military front not using machine superintelligence as a last ditch effort? Weak
23:32 < hprmbridge> cpopell> also missing 'Transhuman Chauvinists' on the left side
23:40 -!- Llamamoe [~Llamamoe@46.204.76.182.nat.umts.dynamic.t-mobile.pl] has joined #hplusroadmap
23:58 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
--- Log closed Sat Apr 01 00:00:19 2023