--- Log opened Sun May 28 00:00:13 2023
00:01 < jrayhawk> ##hplusroadmap/2014-11-10.log.xz:01:04 < fenn> oh jeez now you're saying lamarckian evolution is true?
00:01 < fenn> there are also gene cassettes, viral incorporation of mRNA, bacterial conjugation, probably other mechanisms. these all accelerate the spread of genes in active use more than silent genes
00:10 < fenn> obviously groups are subject to the consequences of actions of their members
00:10 < jrayhawk> ?
00:11 < fenn> uh nevermind, probably shouldn't drag a 9 year old argument into the present
00:26 < fenn> in conclusion, i know slightly more about epigenetics now than ten years ago. there is still a lot to learn
03:20 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
03:30 -!- flooded [~flooded@146-70-115-163.pool.ovpn.com] has joined #hplusroadmap
03:34 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 265 seconds]
05:53 < fenn> https://neosensory.com/science/ seems like a bad decision to put it on the wrist but better than nothing
05:53 < fenn> "people who are deaf or hard of hearing can learn to identify sounds that are algorithmically translated into spatiotemporal patterns of vibration on the skin"
05:54 < fenn> they also discuss using it for artificial senses like infrared vision or abstract data streams
05:56 < fenn> the new iphones have a similar haptic device and cost less while doing more
06:03 < L29Ah> fenn: what haptic device? googling only suggests some vibration response settings akin to the stuff that is present in cellphones for over 30 years
06:04 < fenn> https://www.ifixit.com/News/16768/apple-taptic-engine-haptic-feedback
06:05 < fenn> this looks very close to what's shown in the neosensory buzz device https://www.precisionmicrodrives.com/wp-content/uploads/2021/10/lra-linear-vibrator-construction.original.jpg
06:06 < L29Ah> looks like a usual coin-type vibration motor that has been used in smartphones for ~10 years
06:06 < L29Ah> what's so special about iphone's?
06:06 < L29Ah> the ifixit link lacks any photos of iphone's
06:07 < fenn> the article explains that it's bigger, and they tune the resonant frequency of the oscillating voice coil mass to the resonant frequency of the phone body
06:08 < fenn> it's not a motor
06:08 < fenn> https://i0.wp.com/neosensory.com/wp-content/uploads/2021/09/8.22-Assembly-1.png looks similar
06:09 < L29Ah> https://www.precisionmicrodrives.com/wp-content/uploads/2021/06/coin-shaftless-vibration-motor-exploded-view.original.jpg oh indeed those are different
06:09 < fenn> that is a motor
06:10 < fenn> admittedly i can't actually tell from the cad rendering what's inside
06:14 < L29Ah> https://commons.wikimedia.org/wiki/File:IPhone_6s_-_Taptic_Engine_-_opened-93373.jpg
06:17 < L29Ah> https://upload.wikimedia.org/wikipedia/ar/f/f1/Taptic_Engines_sizes.jpg
06:21 < fenn> it's not like apple invented the concept, they're just the only ones to actually put it in their phones
06:22 < fenn> the reason i brought it up is the neosensory buzz wristband costs $1k
06:24 < fenn> each of those actuators is less than $10, so it seems like a bit much for "algorithms"
06:24 < fenn> the fewer algorithms the better in my opinion
06:24 < fenn> let the brain do the processing
06:25 < fenn> a lot of accessibility tech seems to focus on new users rather than peak performance, and that's a shame
06:26 < fenn> then you're on training wheels for the rest of your life
06:36 < jrayhawk> the other $990 is presumably for bureaucratic hoop-jumping for the FDA and FCC
06:38 < L29Ah> or they have really small-scale manufacturing
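A minimal sketch of the resonance point fenn raises: an LRA (linear resonant actuator) like the Taptic Engine is a voice-coil-driven mass on a spring, so its output peaks near the mechanical resonance f = sqrt(k/m)/2π. The mass and stiffness numbers below are illustrative guesses, not Apple's specs.

```python
import math

def resonant_frequency_hz(mass_kg: float, spring_n_per_m: float) -> float:
    """f = (1/(2*pi)) * sqrt(k/m) for a mass-spring oscillator."""
    return math.sqrt(spring_n_per_m / mass_kg) / (2 * math.pi)

# Assumed values for illustration only: a few grams of moving mass on a
# fairly stiff spring puts resonance in the low hundreds of Hz, the band
# LRAs typically operate in.
moving_mass = 0.002       # kg (assumed)
spring_stiffness = 2400.0 # N/m (assumed)

f0 = resonant_frequency_hz(moving_mass, spring_stiffness)
print(f"resonance ~ {f0:.0f} Hz")  # ~174 Hz with these made-up values
```

This is also why "tuning the resonant frequency of the voice coil mass to the resonant frequency of the phone body" matters: drive the coil off-resonance and the felt vibration drops off sharply.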
09:37 -!- juri__ [~juri@79.140.121.73] has joined #hplusroadmap
09:40 -!- juri_ [~juri@84-19-175-187.pool.ovpn.com] has quit [Ping timeout: 256 seconds]
09:41 -!- juri__ [~juri@79.140.121.73] has quit [Read error: Connection reset by peer]
09:41 -!- pharonix71 [~pharonix7@user/pharonix71] has quit [Ping timeout: 240 seconds]
09:42 -!- juri_ [~juri@79.140.121.73] has joined #hplusroadmap
09:43 -!- juri_ [~juri@79.140.121.73] has quit [Read error: Connection reset by peer]
09:43 -!- pharonix71 [~pharonix7@user/pharonix71] has joined #hplusroadmap
09:47 -!- juri_ [~juri@84-19-175-187.pool.ovpn.com] has joined #hplusroadmap
09:48 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
09:52 -!- flooded [~flooded@146-70-115-163.pool.ovpn.com] has quit [Ping timeout: 240 seconds]
10:06 < docl> https://www.alignmentforum.org/tag/mesa-optimization
10:33 < docl> huh, chatgpt is about as confused about the mesa- prefix as I am, since it's the spanish word for table. I guess you can picture the optimization happening on a surface, akin to the maxwell plaster cast of the thermodynamic surface
10:37 < docl> I tend to use gpt3.5 by default, might have to break out gpt4 for deeper insights... ah, it's suggesting mesa as similar to the geological term, something that rises above the landscape
10:39 < docl> OK, so human morals, reasoning, and so on are a mesa-optimizer where human evolution is the meta-optimizer
10:41 < docl> or maybe evolution is just a plain old optimizer, morals in general are a meta-optimizer, and specific morals / clusters of moral opinions in our society/societies are mesa-optimizer(s)
10:52 < docl> I have been exposing my daughter to numberblocks and other educational shows and teaching her counting concepts, since infancy really. she says some of the numbers out loud, although I'm guessing actually learning to coherently express math concepts like counting and addition is going to take a little longer. not optimizing purely for AI/AI safety here, there are lots of ways she can use math for life
10:52 < docl> success.
11:08 < docl> Here's a brief convo with gpt4 that seemed useful for clarifying wrt mesa-optimizers. https://chat.openai.com/share/09f6ba76-426d-462f-87d5-d3c94fb4cbfb
11:32 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
11:32 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
11:36 < docl> worth noting the 2d landscape (north-south / east-west) is a simplification and you can have an arbitrary number of dimensions (weights/parameters), with the verticality being the loss function. so rather than (x,y,z) it's more like (w1,w2,...,L). so basically the mesa is an elevated (higher than the minimum amount of loss) chunk of conceptual hyperspace, not strictly planar
11:40 < docl> (sorry to belabor the obvious, I just figure there's probably someone in the chat who finds that sort of thing to be a stumbling block. good old school LW about this: https://www.lesswrong.com/posts/2TPph4EGZ6trEbtku/explainers-shoot-high-aim-low)
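A toy rendering of docl's landscape point, with an invented clipped-quadratic loss: in n dimensions the geological "mesa" reading would be a flat region of elevated loss, and plain gradient descent started there sees zero gradient and makes no progress.

```python
import numpy as np

def loss(w: np.ndarray) -> float:
    # a quadratic bowl with its top clipped flat; the clipped region,
    # elevated above the minimum, plays the role of the "mesa"
    return float(min(np.sum(w ** 2), 4.0))

def numerical_grad(f, w: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Central-difference gradient, one coordinate at a time."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        step = np.zeros_like(w)
        step[i] = eps
        g[i] = (f(w + step) - f(w - step)) / (2 * eps)
    return g

w = np.full(10, 3.0)  # 10 parameters (w1..w10), starting out on the plateau
for _ in range(200):
    w = w - 0.1 * numerical_grad(loss, w)

print(loss(w))  # still 4.0: the gradient is zero everywhere on the mesa
```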
11:44 < docl> BTW sharable chatgpt convos are a thing now
13:22 -!- A_Dragon [A_D@libera/staff/dragon] has quit [Killed (Stx (*Kiss of death*))]
13:22 -!- A_Dragon [A_D@libera/staff/dragon] has joined #hplusroadmap
13:50 < hprmbridge> nmz787> https://www.science.org/content/article/inspired-sea-and-sky-biologist-invents-new-kind-microscope
13:50 < hprmbridge> nmz787> https://www.nature.com/articles/s41587-023-01717-8
13:50 < hprmbridge> nmz787> https://cdn.discordapp.com/attachments/1064664282450628710/1112483004795850812/41587_2023_1717_Fig1_HTML.webp
13:50 < hprmbridge> nmz787> https://cdn.discordapp.com/attachments/1064664282450628710/1112483012844724244/41587_2023_1717_Fig2_HTML.webp
13:50 < hprmbridge> nmz787> https://cdn.discordapp.com/attachments/1064664282450628710/1112483026874662962/41587_2023_1717_Fig3_HTML.webp
13:50 < hprmbridge> nmz787> "Inspired by the sea and the sky, a biologist invents a new kind of microscope. The device can achieve clear images using samples suspended in any kind of liquid"
13:51 < hprmbridge> nmz787> "Reflective multi-immersion microscope objectives inspired by the Schmidt telescope"
13:54 < hprmbridge> nmz787> An hour of progress from yesterday and today (and Amazon overnight delivery)... My hydraulic quick connects are covered now https://cdn.discordapp.com/attachments/1064664282450628710/1112483994538360842/PXL_20230528_205239524.jpg
13:54 < hprmbridge> nmz787> https://cdn.discordapp.com/attachments/1064664282450628710/1112484024716378112/PXL_20230528_205255453.MP.jpg
13:55 < hprmbridge> nmz787> Choosing those rubber things took way too long; why these people don't all have dimensions in their listings is beyond me. Listing "it fits a john deere xxyyzz model" doesn't help when you don't have that model, but the fitting might not be uncommon.
15:03 < jrayhawk> ~/win 7
15:04 < jrayhawk> whoops
15:16 -!- Urchin[emacs] [~user@user/urchin] has quit [Ping timeout: 240 seconds]
16:36 < fenn> docl: "Mesa" comes from the Greek word "μέσος" (mésos), meaning "middle" or "intermediate." In the context of "mesa-optimizer," it refers to an optimizer that emerges as a byproduct of another optimizer's optimization process. it's optimizing for an intermediate metric
16:38 < fenn> docl: just use GPT-4, it's not worth your time to use GPT-3.5 to learn incorrect things. if you have a bunch of low-skill busywork like reformatting text, that's a good use for 3.5
16:39 < fenn> docl: also you should learn how to ask questions without leading the LLM
16:39 < fenn> less is more
16:40 < fenn> btw it's pronounced "mez-uh"
17:06 < fenn> jfc i had to ask GPT-4 what the first use of the term was because rationalists are just that bad at explaining where their neologisms come from
17:07 < fenn> this is the source (yudkowsky of course, who else) https://arxiv.org/pdf/1906.01820.pdf "Mesa-optimization is a conceptual dual of meta-optimization—whereas meta is Greek for above, mesa is Greek for below"
17:07 < fenn> (both incorrect etymologies)
17:10 < fenn> 'Possible misunderstanding: "mesa-optimizer" does not mean "subsystem" or "subagent." In the context of deep learning, a mesa-optimizer is simply a neural network that is implementing some optimization process and not some emergent subagent inside that neural network. Mesa-optimizers are simply a particular type of algorithm that the base optimizer might find to solve its task.'
17:18 < fenn> this is the origin of the bad etymology and probably where yudkowsky originally got the terminology from https://web.archive.org/web/20180211110720/www.gwiznlp.com/wp-content/uploads/2014/08/Whats-the-opposite-of-meta.pdf
17:19 < fenn> https://www.wordhippo.com/what-is/the-meaning-of/greek-word-74cd0e2a648f7502284fe308d6ac0aca971525f8.html
17:20 < fenn> What does μέσα (mésa) mean in Greek? English Translation: inside
17:22 < fenn> so in conclusion, it's all sorts of fucked. the jargon is less helpful than just explaining what you mean, because there are so many misunderstandings around it. people can't be bothered to introduce their new jargon and properly link back to definitions
17:43 < fenn> also docl your link to "a conversation with gpt4" was actually gpt-3.5, which you can tell because of the green swirlie icon. gpt-4 has a black (or purple?) background. i wish they would give a lot more metadata than "Model: Default"
17:47 < fenn> hmm i'm confused now because that's the same answer gpt-3.5 gives but digging around in the html it says gpt-4
18:11 < muurkha> oh geez, neurolinguistic programming
18:14 < fenn> natural language processing seems more likely
18:15 < muurkha> nope, he's citing Bandler and Grinder
18:15 < muurkha> and also, for good measure, Grinder and Bandler
18:17 < muurkha> and on p. 2 he describes "NLP" as a "philosophical system"
18:17 < fenn> GPT-4 seems to flip flop between giving me the greek etymology and hallucinating a spanish etymology when it isn't told to be smart. it always (so far anyway) gives the greek answer when using my system prompt: https://fennetic.net/irc/GPT-4_etymology_with_prompt.yaml.txt https://fennetic.net/irc/GPT-4_etymology_without_prompt.txt
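A rough sketch of the with/without-system-prompt comparison fenn describes, using the openai Python package's ChatCompletion API as it existed around May 2023. The system prompt text here is a made-up stand-in, not fenn's actual prompt (which is in the linked yaml).

```python
import openai  # pip install openai==0.27.*, the client current in mid-2023

openai.api_key = "sk-..."  # your API key

# Hypothetical stand-in system prompt, for illustration only.
SYSTEM = "You are a careful etymologist. State the source language precisely."

def ask(question, system=None):
    # With no system message, behavior varies run to run; with one, the
    # model's answers tend to be more consistent, per fenn's observation.
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": question})
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return resp["choices"][0]["message"]["content"]

q = "What language does the 'mesa' in 'mesa-optimizer' come from?"
print(ask(q))          # sometimes Greek, sometimes the Spanish hallucination
print(ask(q, SYSTEM))  # so far, consistently the Greek answer
```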
18:19 < muurkha> maybe instead of asking GPT-4 you could ask Liddell and Scott on Perseus
18:20 < muurkha> well, maybe you're more interested in LLMs than in the etymology
18:22 < fenn> it doesn't like unicode
18:23 < fenn> maybe you can get closer than this https://www.perseus.tufts.edu/hopper/morph?l=me/sa&la=greek
18:25 < fenn> yeah this is an atrocious dictionary lookup interface
18:29 < fenn> it's not in the dictionary in that form. here's the form chatGPT gave me (mesos) https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0058%3Aalphabetic+letter%3D*m%3Aentry+group%3D15%3Aentry%3Dme%2Fsos
18:39 < fenn> i might have more disdain for pseudoscience like NLP if mainstream psychology and psychotherapy weren't also infested with pseudoscience
18:44 < fenn> i'm guessing it went like this: yudkowsky googled "what is the opposite of meta" and landed on this page, which had a nice simple answer and even offered an explanation
18:44 < muurkha> you'll note that that entry gives examples of its use as mesos, meson, mesoi, meses, mesites, mesou, messon, messoi, messoisi, mesos with an omega, mesaiteros, and mesaitatos, but never mesa
18:46 < fenn> if you understand greek conjugation, by all means judge away. i don't
18:46 < muurkha> that's because "mesa" is wrong and NLP practitioners have evidently branched out from pseudoscience to pseudoclassics
18:46 < muurkha> declension, not conjugation
18:46 < muurkha> it's a noun/adjective
18:47 < fenn> word hippo has a bunch of uses in modern greek
18:47 < muurkha> I've forgotten my declensions
18:47 < muurkha> of "mesa"? with an alpha at the end?
18:47 < fenn> yes
18:48 < muurkha> paste away!
18:48 < fenn> linked above, again for your convenience https://www.wordhippo.com/what-is/the-meaning-of/greek-word-74cd0e2a648f7502284fe308d6ac0aca971525f8.html
18:49 < muurkha> hmm, maybe it does exist in modern Greek then
18:50 < muurkha> https://en.wiktionary.org/wiki/%CE%BC%CE%AD%CF%83%CE%B1 says so
18:52 < muurkha> I still think meso- would be the proper form for -optimizers
18:53 < fenn> same
18:56 < fenn> the terminology is confusing because it leads one to believe that you're talking about sub-agents
18:56 < muurkha> mediocre agents
18:57 < muurkha> mediating agents
18:57 < fenn> it's also just ... one more "rationalist" bullshit neologism that nobody else says
18:57 < fenn> so then you have to go do the legwork which inevitably involves reading pages and pages
18:57 < fenn> and you're no better off in the end because the jargon is not used precisely anyway
18:57 < muurkha> between agents
18:58 < muurkha> during agents
18:58 < muurkha> inside agents
18:58 < muurkha> jailed agents: μπαίνω μέσα (baíno mésa, "I am overdraught; I go to jail")
19:05 < fenn> i like abram demski's critique of the ontology. instead he proposes these two kinds of optimization:
19:05 < fenn> "Selection refers to search-like systems, which look through a number of possibilities and select one."
19:05 < fenn> "Control refers to systems like thermostats, organisms, and missile guidance systems. These systems do not get a re-do for their choices. They make choices which move toward the goal at every moment, but they don't get to search"
19:15 < fenn> "a controller with a search inside would typically have some kind of model of the environment, which it uses by searching for good actions/plans/policies for achieving its goal"
19:16 < fenn> "[its] search can only do as well as its model can tell it; however, the agent is ultimately judged by the true consequences of its actions"
19:16 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has joined #hplusroadmap
19:17 < muurkha> I don't think either of those categories includes standard optimization algorithms like gradient descent, the simplex method, or applying Newton's method to the derivative
19:18 < muurkha> or applying the quadratic equation to the derivative if it happens to be quadratic, etc.
19:18 < fenn> those are solidly in the "controller" class
19:19 < muurkha> no, because they don't make irreversible choices; all their choices get a re-do whenever desired
19:19 < fenn> hmm. i guess you could tune hyperparameters with gradient descent, okay
19:20 -!- test_ [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 240 seconds]
19:20 < muurkha> they don't work in terms of moments, they exist outside time and space
19:20 < muurkha> it's the opposite extreme from a missile guidance system
19:21 < muurkha> I conclude that Abram Demski is unfamiliar with the most basic aspects of optimization, so his opinions should be dismissed with extreme prejudice
19:22 < fenn> lol please take my paraphrasing with a grain of salt
19:22 < muurkha> you weren't quoting him?
19:23 < fenn> here's what he actually wrote: https://www.lesswrong.com/posts/WmBukJkEFM72Xr397 https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control
19:24 < fenn> if you really want to read 50 pages of discourse
19:24 * fenn goes for a walk
19:25 < muurkha> your summary seems to be correct
19:25 < muurkha> and he's talking about some classic stochastic optimization algorithms like simulated annealing
19:26 < muurkha> he just doesn't know about the deterministic optimizers that preceded them
19:28 < muurkha> which are called by the same name because they do the same thing
19:28 < fenn> i find it impossible to believe he doesn't know about gradient descent
19:29 < muurkha> or, for that matter, about the modern field of convex optimization
19:29 < muurkha> maybe he categorizes it as a "search-like system"
19:32 < muurkha> but he explicitly says, "The notion of a selection process says a lot about what is actually happening inside a selection process: there is a space of options, which can be enumerated; it is trying them"
19:32 < muurkha> you clearly can't enumerate the space of options of any continuous optimization algorithm, including gradient descent
19:33 < muurkha> nothing wrong with being clueless and writing about your own exploration process
19:34 < muurkha> but it's important not to confuse a novice thinking out loud with insights that advance a field
19:36 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 268 seconds]
19:36 < fenn> are you getting hung up on whether it's continuous or discrete?
19:36 < fenn> a selector can select from a continuous space of possibilities
19:41 < fenn> 'I don't want to frame it as if there's "one true distinction" which we should be making'
19:44 < fenn> the reason anyone's talking about this is because, supposedly, optimizing for one thing might accidentally create another thing that optimizes for a different goal, and it will go foom and kill us all
19:46 < fenn> none of this discussion has changed my initial favorable impression of "AI risk is string theory for computer programmers"
19:46 < muurkha> you could conceivably enumerate a dense set like the rationals
19:48 < muurkha> hmm, I guess you can technically enumerate the computable numbers, and no optimization algorithm can compute an uncomputable number
19:48 < muurkha> so I guess I am forced to withdraw my objection
19:50 < muurkha> there are still optimization algorithms that don't try different options
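A toy contrast of the two categories under debate, on a made-up one-dimensional objective: "selection" generates candidate options, tries each, and keeps the best, while Newton's method applied to the derivative (one of muurkha's examples) computes its way to the minimum without trying options at all, which is why it fits neither category neatly.

```python
import random

def f(x: float) -> float:
    # invented objective for illustration; minimum at x = 2
    return (x - 2.0) ** 2 + 1.0

def selection_optimizer(n_candidates: int = 1000) -> float:
    """Demski's 'selection': enumerate options, try them, pick the best."""
    candidates = [random.uniform(-10.0, 10.0) for _ in range(n_candidates)]
    return min(candidates, key=f)

def newton_on_derivative(x: float, steps: int = 5) -> float:
    """Newton's method on f': x <- x - f'(x)/f''(x).
    For this quadratic, f'(x) = 2(x-2) and f''(x) = 2, so it lands on the
    exact minimum in one step, with nothing that looks like 'trying' options
    and no irreversible, moment-by-moment choices either."""
    for _ in range(steps):
        fprime = 2.0 * (x - 2.0)
        fdoubleprime = 2.0
        x = x - fprime / fdoubleprime
    return x

print(selection_optimizer())      # ~2.0, found by sampling and comparing
print(newton_on_derivative(9.0))  # exactly 2.0, computed directly
```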
21:22 < docl> fenn: ah, I led with the spanish word for table bit based on a prior convo with gpt3.5. gpt4 corrected that to the landscape feature analogy (which seems to fit? looks like it was not the originally intended meaning though)
21:23 < docl> ugh, keyboard issue
21:23 < docl> anyways, thanks for the correction
21:23 < fenn> yes it's powerful at rationalizing previous context
21:25 < fenn> (this is not good)
21:25 < fenn> you can pretty much get it to say anything you want if you present it as a completion task
21:25 < fenn> first and foremost it's a text completion algorithm that has been very slightly nudged in the direction of answering questions
21:27 < docl> but yes the link I gave was to the gpt4 convo. I guess there are some issues in the UI for shared convos still. You can sort of tell the difference based on GPT4 giving longer paragraphs in its answers. I usually use 3.5 for speedrunning and 4 for when I want further insights. 4 caps at 25 questions, and is slower to respond.
21:33 < docl> my original assumption was that mesa- was intended to be intermediate vs external, so I was surprised that 3.5 told me it was based on the spanish word for table, and further surprised that gpt4 corrected it to the geographic feature (because the latter actually makes sense, it's just not, uh, typical of how academic neologisms would normally be coined)
21:33 < fenn> i didn't want to give my phone number to what is effectively microsoft and unambiguously tie all my private thoughts to my personally-assigned surveillance adtech device, so i've been mooching off of other peoples' API access, which fortunately doesn't have the turn limit. it also seems to cost less with my usage patterns
21:34 < docl> er, internal vs external? something like that
22:25 -!- test_ [~flooded@146.70.195.99] has joined #hplusroadmap
22:29 -!- flooded [flooded@gateway/vpn/protonvpn/flood/x-43489060] has quit [Ping timeout: 268 seconds]
22:51 -!- TMM_ [hp@amanda.tmm.cx] has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
22:52 -!- TMM_ [hp@amanda.tmm.cx] has joined #hplusroadmap
23:00 -!- Netsplit *.net <-> *.split quits: mrdata, otoburb, drmeister, yorick, mxz, user_
23:00 -!- Netsplit over, joins: user_
23:00 -!- yorick [~yorick@pennyworth.yori.cc] has joined #hplusroadmap
23:01 -!- mrdata [~mrdata@135-23-182-55.cpe.pppoe.ca] has joined #hplusroadmap
23:01 -!- Netsplit over, joins: drmeister
23:01 -!- mrdata [~mrdata@135-23-182-55.cpe.pppoe.ca] has quit [Changing host]
23:01 -!- mrdata [~mrdata@user/mrdata] has joined #hplusroadmap
23:02 -!- yorick is now known as Guest3564
23:02 -!- mxz [~mxz@user/mxz] has joined #hplusroadmap
23:03 -!- ANIMEX [~Malvolio@idlerpg/player/Malvolio] has quit [Ping timeout: 256 seconds]
23:05 -!- otoburb [~otoburb@user/otoburb] has joined #hplusroadmap
23:33 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
23:36 < Malvolio> .t https://lexfridman.com/neil-gershenfeld/
23:36 < EmmyNoether> #380 – Neil Gershenfeld: Self-Replicating Robots and the Future of Fabrication | Lex Fridman Podcast
--- Log closed Mon May 29 00:00:14 2023