--- Log opened Mon Mar 20 00:00:15 2023
01:22 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
01:54 -!- Guest67 [~Guest67@69.red-80-26-154.staticip.rima-tde.net] has joined #hplusroadmap
01:55 -!- Guest67 [~Guest67@69.red-80-26-154.staticip.rima-tde.net] has quit [Client Quit]
02:34 -!- Phoenix34 [~Phoenix@d75-157-96-194.bchsia.telus.net] has joined #hplusroadmap
02:37 < Phoenix34> Hello. I believe I have made a large advancement toward AGI. If anyone is interested or would like to chat about it please email me: phoenix.formless@protonmail.com
02:43 -!- Phoenix34 [~Phoenix@d75-157-96-194.bchsia.telus.net] has quit [Quit: Client closed]
03:58 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
04:09 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
04:49 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has quit [Read error: No route to host]
05:13 < hprmbridge> lachlan> kanzure: have you tried using whisper? Didn’t work well?
05:14 < kanzure> i haven't built my GPU temple yet
05:15 < hprmbridge> lachlan> afaik, whisper doesn’t need a huge amount of compute
05:40 -!- flooded is now known as _flood
06:20 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:4ce4:65bf:609c:13fc] has joined #hplusroadmap
06:25 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
06:26 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
07:51 < hprmbridge> Perry> Whisper will run just fine on a modern Macbook.
07:51 < hprmbridge> Perry> Or anything else really.
07:52 < kanzure> in the browser a whisper demo was taking like 1 minute per 10 seconds of audio-- i take about 10 seconds for every 12 seconds of audio when typing (okay not really- it depends on the subject but sometimes i can get ahead of the speakers)
08:04 < hprmbridge> Perry> Why do it in browser?
08:04 < hprmbridge> Perry> It runs locally just fine at high performance.
08:05 < hprmbridge> Perry> It screams on an M2 machine.
08:05 < hprmbridge> lachlan> Browser is probably getting you 20-30% of the perf compared to native
08:05 < hprmbridge> lachlan> Better simd, no vm overhead, multiple threads
08:07 < kanzure> fine, fine.
08:17 < kanzure> it's too bad that chatgpt-3.5-turbo can't clean up whisper output. it seems to want to complete it/predict the next token instead of following the instruction to break things into paragraphs.
08:18 < hprmbridge> eleitl> Everybody was AI hacking...
08:25 < lsneff> kanzure: try gpt4
08:42 < kanzure> gpt4 tries at least, but it doesn't quite get it: "So maybe Caitlin, I could sort of pick on you first as a legacy banking guy yourself. What's your kind of high level take on everything that's going on right now? Well, look, the traditional banking system has always been insolvent at a systemic level." (it doesn't realize someone else started speaking)
08:43 < kanzure> there's probably not that much sample text online of raw speech-to-text getting converted into well-formatted transcripts
08:45 -!- Malvolio [~Malvolio@idlerpg/player/Malvolio] has joined #hplusroadmap
08:51 < lsneff> That's honestly surprising to me. Could you send some sample text so I can mess around with prompting?
09:02 < kanzure> lsneff: https://diyhpl.us/~bryan/irc/chatgpt/caitlin-v-carter.txt and my transcript was https://diyhpl.us/wiki/transcripts/2023-03-17-on-the-margin-nic-carter-caitlin-long/
09:03 < lsneff> I think whisper2 can add speaker tags
09:06 < kanzure> next one i'm looking at is https://diyhpl.us/~bryan/irc/chatgpt/eacc-e12-nathan-labenz.txt
09:08 < lsneff> I don't think I could process that tbh
09:10 < hprmbridge> Perry> There’s a whisper2???
09:10 < hprmbridge> Perry> Since when?
09:10 < kanzure> don't blink
09:13 < hprmbridge> Perry> Oh, you just mean the large-v2 model from december?
09:15 < lsneff> hmm maybe I’m misremembering
09:30 < hprmbridge> lachlan> the discord bridge doesn't like apostrophes
09:32 < hprmbridge> eleitl> How many hardware resources and energy were used for GPT4 training (how long did it take as well), anyone knows?
09:32 < hprmbridge> Perry> No one will say what it took.
09:32 < hprmbridge> Perry> The paper says they won’t say for some vague AI safety reason.
09:33 < hprmbridge> eleitl> OpenAI is full of shit, as usual.
09:35 < nmz787> eleitl: now I've got that "everybody was kung fu fighting" song in my head
09:35 < hprmbridge> eleitl> ;p We're probably in diminishing returns territory for DeepML already, unless algorithmic advances happen.
09:35 < hprmbridge> Perry> I don’t see why we’re in diminishing returns.
09:36 < hprmbridge> Perry> The paper seemed to indicate that everything scaled exactly as expected.
09:36 < hprmbridge> Perry> Shockingly close to predictions.
09:36 < hprmbridge> eleitl> We're using large boxes full of giant dies (some already WSI) burning many MWh for months in giant DCs which are getting more expensive with generation.
09:37 < hprmbridge> Perry> the results are worth it so far.
09:37 < nmz787> Perry: are they?
09:37 < hprmbridge> eleitl> We've been able to afford the expenses, so far.
09:37 < nmz787> when customer service robots are no longer infuriating, I'll agree
09:38 < hprmbridge> Perry> Oh, they absolutely are. GPT-4 is writing code for me, and successfully so.
09:38 < hprmbridge> Perry> It’s doing lots of real work for real people.
09:38 < nmz787> not allowed by corporate policy here
09:38 < hprmbridge> Perry> And customer service robots are all far older than ChatGPT
09:38 < hprmbridge> Perry> Too bad for your company, it will have trouble because it will be behind by large factors compared to the productivity in other places.
09:39 < nmz787> the risk is IP leakage
09:39 < hprmbridge> Perry> They’ll have to adapt or die.
09:39 < nmz787> unless none of the competition use it either
09:39 < nmz787> (due to similar concerns)
09:39 < hprmbridge> Perry> The thing about free markets is new competitors appear all the time.
09:40 < hprmbridge> eleitl> The interesting part is whether we're saturating yet. I think we are, so we'll run out of money and energy to scale much further. Unless algorithmic advances, which likely will require new hardware.
09:40 < hprmbridge> Perry> Mercury set up a treasury management function and an automatic sweep account mechanism to guarantee deposits up to $5M only days after the SVB collapse. Will I use them for my business banking or will I use Wells Fargo? Guess.
09:41 < hprmbridge> Perry> I don’t think we’re saturating yet.
09:41 < lsneff> words just words
09:41 < hprmbridge> Perry> What is “words just words”?
09:41 < hprmbridge> eleitl> Do you have numbers, Perry?
09:41 < hprmbridge> Perry> Numbers for what? The GPT-4 paper does give some of their findings…
09:41 < lsneff> it's irrational to assume that we're suddenly going to cap out on transformers right here
09:42 < nmz787> electrical transformers?
09:42 < nmz787> mining constraints is a major worry of electrification of automobiles in general
09:42 < hprmbridge> Perry> Surely you know what a transformer is?
09:42 < hprmbridge> Perry> Not an electrical component.
09:42 < lsneff> transformers are the model used by LLMs
09:42 < hprmbridge> Perry> The architecture.
09:42 < nmz787> will have to google LLM
09:42 < lsneff> ^
09:43 < hprmbridge> Perry> You will have to google LLM?
09:43 < hprmbridge> Perry> Really?
09:43 < nmz787> something about law degrees is top hit for me
09:43 < hprmbridge> Perry> You’re joking about all of this right?
09:43 < nmz787> no
09:43 < nmz787> adding transformer gets me something more ML esque
09:43 < hprmbridge> Perry> You’re way, way behind.
09:43 < hprmbridge> Perry> LLM = Large Language Model.
09:44 < nmz787> meh, I live on 75 acres at the edge of the wilderness
09:44 < nmz787> I am hedging my bets
09:44 < hprmbridge> Perry> The internet comes to your 75 acres so you have no reason to be any more out of touch than someone in central Paris or San Francisco.
09:44 < hprmbridge> Perry> I presume that being here means you’re interested in this stuff. Well, it’s going vertical right now, so keeping up is kind of a survival thing.
09:45 < nmz787> I spend a lot of time outside when I'm not doing #dayjob in semiconductor design
09:45 < nmz787> (physical design, not logic)
09:46 < hprmbridge> Perry> So physical design is going to be done by machines soon.
09:46 < hprmbridge> eleitl> nmz787, are you in the loop what kind of machine resources learning a major model a la GPT4 takes? I haven't been looking at it for years, so I'm out of the loop. No time to read https://old.reddit.com/r/mlscaling/ either.
09:46 < hprmbridge> Perry> Indeed, I suspect it might be done by gradient descent *without* using AI
09:46 < nmz787> heh, yeah when the other humans adopt a Domain Specific Language for Design Rules
09:47 < hprmbridge> Perry> Maybe. Right now my programmer buddies are all having machines write code for them all day.
09:47 < hprmbridge> Perry> And I am too.
09:47 < nmz787> eleitl, not really, I've heard 1000 or 10000 GPU cards for one of the recent AIs
09:47 < hprmbridge> Perry> Don’t assume that being a human is going to be so important for long.
09:47 < kanzure> don't argue on twitter https://twitter.com/ZeframM/status/1637854649473703936
09:47 < hprmbridge> Perry> No one uses GPU cards for this at OpenAI, they use Cerebras hardware.
09:47 < kanzure> although he does offer this ASML EUV video https://www.youtube.com/watch?v=5Ge2RcvDlgw
09:48 < hprmbridge> eleitl> Cerebras is WSI, so no longer any free lunch there.
09:48 < nmz787> hehe we joked about cerebras and yield needing to swap from dies/wafer to wafers/die
09:48 < hprmbridge> Perry> Not the point, they’re not using thousands of GPU cards.
09:49 < hprmbridge> eleitl> The point is that WSI is the end of the rainbow.
09:49 < hprmbridge> Perry> And the exact amount they need is important.
09:49 < hprmbridge> Perry> It’s not the end of the rainbow, they can go to 3D layering.
09:49 < hprmbridge> Perry> And the machines are going to be helping very soon.
09:49 < kanzure> nmz787: we will have some presentations in austin about LLMs and some live demos. if you haven't used chat.openai.com/chat then it's worth trying out for a few minutes.
09:50 < nmz787> kanzure I think it wanted my phone number, and I don't want to give it to them
09:50 < hprmbridge> lachlan> @kanzure EUV machines are conceptually not too difficult to understand, but practically extremely hard to make work
09:50 < hprmbridge> eleitl> Machines can't change physics. At least, physics as we know it. QC isn't going places either, so far.
09:50 < kanzure> pmetzger: by any chance do you know names of the custom ASIC hardware startups doing tensor compute (not google's TPU) or other things. i think there's one more other than cerebras.
09:51 < nmz787> I'm holding out to learn "new programming" until quantum compute gets bigger
09:51 < hprmbridge> Perry> On the other hand, some players like the chinese 1. have stolen plans and 2. have extreme motivation.
09:51 < kanzure> lsneff: yes i am curious about what specifically is hard. getting clean UV light source? i'm sure that's hard- but is there anything else?
09:51 < nmz787> kanzure: didn't you see the pics of the lens manufacturing I posted last week?
09:51 < hprmbridge> Perry> kanzure: there are a bunch. Like there’s Samba Nova and others.
09:51 < kanzure> nmz787: anyone can pose with large equipment big deal
09:51 < nmz787> the processing chamber is big enough to house 15 migrant workers
09:52 < hprmbridge> lachlan> isn't that beffjezos guy on twitter doing another ml chip startup?
09:52 < nmz787> kanzure: do you know how the actual photons are generated?
09:52 < nmz787> a falling droplet of metal, hit with laser, to plasmafy and shine
09:52 < hprmbridge> lachlan> you have to hit it twice actually
09:53 < nmz787> that alone requires a ton of engineering
09:53 < kanzure> nmz787: if you don't want to try chatgpt then you could at least watch some videos https://www.youtube.com/watch?v=krWCO62Xi1c&t=36s
09:53 < nmz787> regardless of the "lensing" (which are all diffractive, not refractive)
09:53 < kanzure> lsneff: i don't know what beffjezos is doing.
09:55 < hprmbridge> eleitl> What is the price tag for CS-2? 23 kW and several MUSD, or is that for a cluster of CS-2?
09:57 < nmz787> kanzure: idk, that video isn't anything that impressive to me
09:57 < nmz787> seems like the top search hits google would give me for those prompts
09:59 < kanzure> eleitl: not an exact number on gpt-4 training cost but there was this thing from morgan stanley https://www.reddit.com/r/mlscaling/comments/11pnhpf/morgan_stanley_note_on_gpt45_training_demands/
09:59 < nmz787> eleitl "We do know that a single Cerebras CS-2 costs several million dollars, so this elegance, and simplicity, comes at a substantial capital cost. Sep 14, 2022"
09:59 < nmz787> from forbes.com
10:00 < hprmbridge> eleitl> This looks reasonably current https://www.businesswire.com/news/home/20221114005138/en/Cerebras-Unveils-Andromeda-a-13.5-Million-Core-AI-Supercomputer-that-Delivers-Near-Perfect-Linear-Scaling-for-Large-Language-Models
10:01 < hprmbridge> eleitl> Given that's racks full of WSI, stacking and new processes don't give you much scaling headroom. So it's adding more racks and more MW. Which stops rather soon.
10:02 < nmz787> WSI can stack
10:02 < hprmbridge> eleitl> Yes, but stacking scaling post-Moore isn't giving you all that much.
10:03 < hprmbridge> nmz787> We're not really stacking much these days tho
10:04 < hprmbridge> eleitl> Seems money and DC power is the bottleneck, until we figure out how to go past deep learning. Anyone knows who's doing signficant research there?
10:04 < hprmbridge> nmz787> Stacking a mm thick chip in a multi mm thick package affords a lot. Much of the benefit is reduced interconnect parasitics (resistance, meaning increased bandwidth, and/or lower power usage)
10:07 < hprmbridge> nmz787> New processes are the heart of innovation... Reducing power consumption, increasing bandwidth, avoiding mineral/consumable shortages/expense
10:09 < hprmbridge> eleitl> Well Moore has been dead for a while, and stacking only buys you that much. I haven't been able to find (jesus are search engines shit these days) whether CS-2 already uses stacking.
10:09 < L29Ah> https://www.vice.com/en/article/wjkgz9/human-blood-sausage-cannibalism-wtf
10:09 < L29Ah> .t
10:10 < L29Ah> #shitops
10:16 < nmz787> they keep saying moore is dead, and then flip back to "more than moore"
10:16 < nmz787> a slide the other day had a collection of "moore's law is dead" style quotes going back to like the 1800s
10:16 < hprmbridge> eleitl> Seems there is some research off the beaten track https://old.reddit.com/r/mlscaling/comments/11o82m2/spikegpt_largestever_spiking_neural_network_260m/ but I don't have time anymore to look into it. Anyone who is, can you provide a very short review.
10:17 < hprmbridge> eleitl> Moore (constant doubling of *affordable* transistors aka linear semi-log) is most assuredly dead.
10:18 < nmz787> slap anther wafer on top, you just doubled
10:18 < nmz787> it's per IC
10:18 < hprmbridge> eleitl> Then 2, 4, 8, 16... now what?
10:19 < hprmbridge> eleitl> Your WSI is constant price, so you've got linear scaling, and a pretty low ceiling. The expense and the MWh are gonna kill you.
10:21 < hprmbridge> eleitl> Let's say 10 GUSD for a DC, and some 100 MW. After you've filled that thing up with your stacked WSI, where are going going?
10:28 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has joined #hplusroadmap
10:34 < jrayhawk> moores law is dead, so long as we redefine moores law and also the word "is"
10:35 < jrayhawk> all the interesting constraints right now are in heat budgets rather than transistor count, and we have plenty of options there
10:36 < hprmbridge> lachlan> the cost stops us from having hugely stacked chips running at low frequencies?
10:36 < hprmbridge> Perry> You can lower energy costs by lowering clock rate a bit; dynamic power scales quadratically in clock rate.
10:37 < hprmbridge> Perry> And you can start actively layering without stacking, going to 3D the way memory technologies are now going. That could last for quite a while.
10:37 < hprmbridge> Perry> And once we have actual AGI I think MNT shows up quickly.
10:38 < hprmbridge> Perry> The cost of WSI depends on the volume you’re making. Start scaling the factories etc. and those things get cheaper too. They’ll never be pennies but they don’t have to be millions a wafer.
10:39 < hprmbridge> Perry> Anyway, in spite of Eugen’s pessimism I’m not seeing much of a wall here, at most some speed bumps.
10:41 < hprmbridge> kanzure> i think silicon fab capacity will be an issue. it's not like we build spare excess capacity....
10:41 < hprmbridge> Perry> Fab capacity in a larger sense will be a limiter from now until the end of time.
10:42 < hprmbridge> Perry> or at least computronium availability.
10:42 < hprmbridge> Perry> But as it becomes profitable to build more fabs more will be built. Assuming lunatics don’t blow them up.
10:42 < hprmbridge> kanzure> that's why i was looking at ASML. is this actually hard? or just intellectual property brainworms?
10:42 < hprmbridge> lachlan> total molecular assembler throughput
10:43 < hprmbridge> Perry> I think it’s hard but not unduplicatable.
10:43 < hprmbridge> kanzure> why not do that? seems like a reasonable startup.....
10:43 < hprmbridge> Perry> I don’t have 10 or 20 billion lying around.
10:43 < hprmbridge> kanzure> checks all the boxes
10:44 < hprmbridge> Perry> And I have no special expertise in the area. Other people will do it.
10:44 < hprmbridge> lachlan> Other techniques that are a lot more doable for cheaper
10:44 < hprmbridge> kanzure> but other people so far haven't. i dunno.
10:45 < jrayhawk> https://www.semiaccurate.com/assets/uploads/2022/05/Intel-Ponte-Vecchio.jpg intel's been reconceptualizing layering and chips
10:53 < hprmbridge> kanzure> https://progressforum.org/posts/g9KcwhSPh6JPhXqua/a-catalog-of-big-visions-for-biology
10:53 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
10:54 < hprmbridge> kanzure> https://progressforum.org/posts/f3dsHomctgaJXeQWp/patrick-collison-on-supercharging-science
10:55 < hprmbridge> eleitl> jrawhawk: I'm talking about Moore's original publication. You're talking about something else. Call it whatever you want, but not Moore scaling.
10:55 < hprmbridge> Perry> Moore said transistors per chip would double per unit time. We are still getting die shrinks even now.
10:55 < hprmbridge> Perry> What we’re not getting is denard scaling.
10:56 < hprmbridge> Perry> The process shrinks are going to end but then you go to more and more layers on the chip into 3D and that doubles transistors for you.
10:56 < hprmbridge> Perry> That’s already been done for RAM and flash memories.
10:56 < hprmbridge> Perry> I can’t remember where flash is at but some of them have 64 or more layers, I haven’t checked in a while.
11:04 < jrayhawk> the specific verbiage for moore's metric was "components per integrated function" in 1965 and "components per chip" in 1975
11:04 < jrayhawk> he concluded that size and density were less interesting than the combination of the two
11:06 < nmz787> kanzure: we don't build excess fab capacity because that doesn't make economic sense. That's just simple supply and demand, really
11:06 < hprmbridge> kanzure> yeah of course. but it would be better if we could drop the cost of this fab stuff by a factor of 1,000x.
11:08 < jrayhawk> resources spent on fabs today are resources that can't be spent on better fabs tomorrow
11:08 < nmz787> process shrinks will continue for a while longer... we're going to nanoribbons/GAA now, and next will be transistor stacking, then likely 2d sheet technology (MoS2 seems plausible for performance, idk about materials economics)
11:09 < hprmbridge> lachlan> nmz787: how far out are those?
11:09 < nmz787> recently I realized the best way it seems to bring the future, faster, is to upgrade cell phones or TVs or laptops more often than I've been used to
11:11 < nmz787> lachlan I think we want to start selling GAA in like 2 or 3 years (sorry I don't keep up with the marketing too much)
11:12 < nmz787> there's new innovation in EUV even, largely getting traction over the last year or so (high NA)... which is about scaling, yield improvement, and reduction of process complexity
11:13 < nmz787> the quantum stuff is another parallel track
11:13 < jrayhawk> kanzure: similar principle: if you have a million dollars with which to solve a well-understood computational problem, you're best off buying no hardware until projections for the computation timeframe are <24 months as technology improves
11:14 < jrayhawk> with fabs, capital expenditure has to be matched proportionately to the limited technology window
11:18 < kanzure> the chip design lifecycle is so long that right now projects for tapeout for 2025 are just getting started on the design side
11:18 < kanzure> fabs are probably fully spoken for by the time they turn on
11:18 < kanzure> (or before they get built at all?)
11:21 < jrayhawk> intel's process/design -> C-stepping lifecycles are closer to a decade, i think
11:22 < kanzure> that's ungood. maybe an auction system to outbid other projects on the wafer line.
11:23 < kanzure> and then projects gte priority.
11:23 < kanzure> get
11:25 < nmz787> I don't think I've seen any decades long products, unless they get process-shrunk
11:26 < nmz787> kanzure: why wouldn't the fabs be spoken for by the time tapeout happens? the tapers-outers want their chip fabbed, thus get on the schedule
11:27 < nmz787> unused fab capacity is a loss to the company
11:28 < nmz787> tinytapeout scale shuttles, where people are using lots of synthesis avoid much of the layout complexity that comes with larger designs
11:28 < nmz787> but a 1mm sq die isn't very realistic for the majority of CPUs/SoCs out there
11:29 < nmz787> sure it's OK for a few 100 or 1000 gates, or a small analog block
11:30 < jrayhawk> there's also a long iterative feedback cycle in improving yield and optimizing compnent constraints while designing around those constraints; major architectural decisions about clock domains and pipelining come down to field effects like how much gate leakage occurs
11:32 < jrayhawk> physical tradeoffs informs logical tradeoffs and logical tradeoffs inform physical tradeoffs
11:33 -!- Phoenix92 [~Phoenix@d75-157-96-194.bchsia.telus.net] has joined #hplusroadmap
11:34 < jrayhawk> nmz787: jblake's been there since, what, 2015, and his architecture is still years from release, right?
11:35 < nmz787> I believe a few of the architectures he's worked on have been released
11:35 < nmz787> or scrapped/recajiggered/rebranded
11:35 < nmz787> pretty sure this is one he worked on https://en.wikipedia.org/wiki/Sunny_Cove_(microarchitecture)
11:37 -!- Phoenix92 [~Phoenix@d75-157-96-194.bchsia.telus.net] has quit [Client Quit]
11:37 < jrayhawk> he said he had minor contributions to 11th and 12th gen
11:37 < jrayhawk> er, no, 12th and 13th
11:39 < hprmbridge> cpopell> o7
11:40 < kanzure> welcome back cpopell
11:44 < hprmbridge> cpopell> hallo again, been a while
11:44 < hprmbridge> cpopell> @yashgaroth bro
11:44 < hprmbridge> yashgaroth> ayyyyyyyyyyyyyy
11:47 < hprmbridge> cpopell> where you at these days?
11:47 -!- Voyager [~Voyager@075-134-000-004.res.spectrum.com] has joined #hplusroadmap
11:59 < kanzure> https://rootsofprogress.org/progress-humanism-agency
12:00 < hprmbridge> kanzure> @an1lam presumably you saw that one on human agency^
12:02 < hprmbridge> jasoncrawford> 👋 hi all! I write rootsofprogress.org
12:03 < kanzure> an1lam was talking about https://stephenmalina.com/post/2023-01-11-viriditas-dialogue/
12:26 < kanzure> strange article about xenotransplantation https://www.growbyginkgo.com/2023/02/16/farm-to-operating-table/
12:34 -!- cthlolo [~lorogue@77.33.23.154.dhcp.fibianet.dk] has quit [Read error: Connection reset by peer]
13:19 < hprmbridge> lachlan> Welcome @jasoncrawford !
13:21 < kanzure> @jasoncrawford did you see https://diyhpl.us/wiki/transcripts/2023-03-03-twitter-bayeslord-pmarca/
13:44 < hprmbridge> jasoncrawford> no! very interesting, will read
13:45 < kanzure> pmarca in context is marc andreessen
14:07 < hprmbridge> jasoncrawford> of course!
14:11 < kanzure> there is also the grimes one
14:13 -!- helleshin [~talinck@108-225-123-172.lightspeed.cntmoh.sbcglobal.net] has joined #hplusroadmap
14:51 -!- Voyager [~Voyager@075-134-000-004.res.spectrum.com] has quit [Ping timeout: 255 seconds]
14:55 < jrayhawk> "Amjad: [...]AGI mythology is very intwined with our history going back thousands of years. There's lots of mythology around technology we can't control," literal demon summoning
14:56 < jrayhawk> or djinn
14:56 < hprmbridge> kanzure> tomorrow I'll be doing this twitter spaces on anti-aging at 11am PST https://twitter.com/MakePeople_Film/status/1637931559922843648
14:58 < jrayhawk> does sean wrona accept transcription work still
14:59 < hprmbridge> kanzure> i could never get him to do it; maybe times have changed?
15:01 < hprmbridge> kanzure> last I saw he was doing videos on optima fingerwork https://youtu.be/TSftAHT_2uA?t=14m
15:01 < Muaddib> [TSftAHT_2uA] How to Type: Keyboard Geometry (25:09)
15:07 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
15:08 -!- Voyager [~Voyager@075-134-000-004.res.spectrum.com] has joined #hplusroadmap
15:11 < jrayhawk> we need to buy him some twiddlers and ship him to thesushidragon's warehouse
15:20 < fenn> moore's law as instructions per second per dollar through 2022: https://www.flickr.com/photos/jurvetson/51391518506/
15:20 < fenn> transistors per unit area is not that interesting by comparison
15:21 < hprmbridge> Perry> Moore’s law is transistors per device, not IPS per dollar.
15:21 < hprmbridge> Perry> Not that IPS per dollar isn’t an interesting thing to track and it’s clearly also exponential.
15:25 < hprmbridge> an1lam> yep saw it. connection being choosing direction to go?
15:27 < hprmbridge> kanzure> sure, yes, but also jason wrote that (and joined today)
15:37 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has left #hplusroadmap []
15:39 < jrayhawk> there was some good content in that video. could've been about half of the length, though
15:50 -!- Voyager [~Voyager@075-134-000-004.res.spectrum.com] has quit [Ping timeout: 255 seconds]
16:05 -!- L29Ah [~L29Ah@wikipedia/L29Ah] has joined #hplusroadmap
16:38 -!- Voyager [~Voyager@075-134-000-004.res.spectrum.com] has joined #hplusroadmap
16:45 -!- balrog [znc@user/balrog] has quit [Read error: Connection reset by peer]
16:46 -!- balrog [znc@user/balrog] has joined #hplusroadmap
16:51 -!- balrog [znc@user/balrog] has quit [Read error: Connection reset by peer]
16:52 -!- balrog [znc@user/balrog] has joined #hplusroadmap
17:24 -!- Voyager [~Voyager@075-134-000-004.res.spectrum.com] has quit [Quit: Leaving]
17:26 -!- balrog [znc@user/balrog] has quit [Read error: Connection reset by peer]
17:29 -!- balrog [znc@user/balrog] has joined #hplusroadmap
18:46 < kanzure> "AlphaLink: Protein structure prediction with in-cell photo-crosslinking mass spectrometry and deep learning" https://www.nature.com/articles/s41587-023-01704-z https://twitter.com/slavov_n/status/1637935744592273410
19:08 < kanzure> why are people running around collecting blood from rats all of the sudden
19:13 < fenn> to make vegan rat blood sausage
19:13 < superkuh> Because they're sars-cov-2 carriers?
19:14 < superkuh> Probably easier to collect their poop for surveying rat sars-cov-2 mutants though.
19:18 < kanzure> i think it's that hutch blood rejuvenation stuff
19:18 < fenn> "It also raises the possibility that, as the ever-changing virus (covid-19) adapts to new and very different hosts, it will evolve in ways that make it unrecognizable to humans who thought they had become immune to it."
19:18 < kanzure> hutch.. or something.
19:21 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0:4ce4:65bf:609c:13fc] has quit [Quit: Leaving]
19:21 < fenn> harold katcher?
19:22 < fenn> https://joshmitteldorf.scienceblog.com/2022/06/27/lifespan-of-harold-katchers-rats/
19:31 < kanzure> i think that's why.
19:33 -!- Malvolio is now known as GLYPHORIA
21:09 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 252 seconds]
21:12 < fenn> a humanoid robot picking boxes off a shelf (agility robotics) https://twitter.com/simonkalouche/status/1637889340423413770
21:14 < fenn> did this company come out of nowhere?
21:18 < fenn> they added arms to the cassie chassis in early 2019
21:25 < fenn> the hospital green is to hide blood stains
21:31 < fenn> "Optimized Runtime: Digit works 16 out of 24 hours ... Autonomous Charging: Digit connects itself to its docking station when it needs to charge"
21:31 < fenn> is a hot swap battery really that difficult
21:36 < fenn> 1 minute overview of the agility robotics development history https://www.youtube.com/watch?v=rnFZAB9ogEE
21:37 < fenn> no gripper hand because it's too dumb to use it anyway?
--- Log closed Tue Mar 21 00:00:16 2023