--- Log opened Sun Oct 02 00:00:35 2022
01:53 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has joined #hplusroadmap
04:44 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0::1909] has joined #hplusroadmap
05:00 < kanzure> heath: ok now run it in a browser
05:31 -!- spaceangel [~spaceange@ip-78-102-216-202.bb.vodafone.cz] has joined #hplusroadmap
08:31 -!- darsie is now known as darkie
08:35 -!- darkie is now known as darsie
08:50 < kanzure> "Co-writing screenplays and theater scripts with language models: Dramatron" https://arxiv.org/abs/2209.14958
10:11 < L29Ah> Researchers from the University of Eastern Finland tracked 2,300 middle-aged men for an average of 20 years. They categorized the men into three groups according to how often they used a sauna each week. The men spent an average of 14 minutes per visit baking in 175° F heat. Over the course of the study, 49% of men who went to a sauna once a week died, compared with 38% of those who went two to three
10:11 < L29Ah> times a week and just 31% of those who went four to seven times a week.
10:11 < L29Ah> apparently, sweat-generating activities tend to increase lifespan
10:12 < L29Ah> i wonder if sweat is a primary means of removing certain harmful molecules from blood plasma
10:13 < muurkha> that was a common belief in the 19th century
10:13 < muurkha> there were lots of sweat therapies then
10:18 < muurkha> but the results as you described them do not justify the inference that sauna extends life
10:19 < muurkha> it might be that sauna attendance in Finland is a good indicator of health status among middle-aged men
10:19 < muurkha> which, on its face, seems more plausible than the idea that sweating can reduce your death risk by 40%
10:32 < L29Ah> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5941775/ RCTs show positive effects but they are all too short
10:43 < TMA> I have read (I know, {{cite}}) that it is not the sweat but the heat (or heat shock) that is beneficial.
10:45 < TMA> Something, something, good for hearth, something, ...
10:45 < TMA> or heart perhaps. Memory's hazy on the details
10:46 < muurkha> beans, beans, they're good for your heart; the more you eat,
13:00 -!- codaraxis___ [~codaraxis@user/codaraxis] has joined #hplusroadmap
13:04 -!- codaraxis__ [~codaraxis@user/codaraxis] has quit [Ping timeout: 265 seconds]
13:28 -!- gnawk [~gnawk@76-196-232-104.lightspeed.sndgca.sbcglobal.net] has joined #hplusroadmap
15:17 -!- spaceangel [~spaceange@ip-78-102-216-202.bb.vodafone.cz] has quit [Remote host closed the connection]
15:21 < kanzure> http://bayareamicrofluidicsnetwork.org/
15:29 < juri_> hey, i know it's a long shot, but i'm having a problem with 2D PGA, FPUs, and how to establish error ranges dynamically, that i could use some serious eyeballs on. I've written up a forum posting about it: https://discourse.bivector.net/t/papers-books-notes-on-numerical-stability-for-2d-pga-with-fpus/611 . tl;dr: i'm having problems establishing the error range of really steep line intersections, and
15:29 < juri_> could use some minds sharper than mine.
15:36 < muurkha> this is a thing I have often wondered about, but as far as I've gotten is that there are some battle-tested interval arithmetic libraries out there and I think even a few reduced affine arithmetic libraries (which can provide better error bounds than plain interval arithmetic)
15:36 < muurkha> but I don't think there are any for Haskell and I don't know if Haskell offers enough control over FPU rounding modes to reimplement them
15:38 < muurkha> I don't have a mind as sharp as yours, though!
15:38 < juri_> I think it does. i'm (ab)using them.
15:39 < juri_> i just don't know what to do with the values, now that i have them.
15:39 < juri_> my current thought is to do some statistical analysis, but at least one person has volunteered to throw a neural network at the problem.
15:40 < juri_> this is just not my area.
15:41 < muurkha> you can definitely do statistical analysis of your errors but I don't think that will help you not have error :)
15:41 < juri_> i'm using ULP tracking every time i touch my values, and am dragging around the ULP with the value, through the program. now, i just need to figure out what to do with it.
15:42 < muurkha> I think the standard approach to interval arithmetic is a little different
15:42 < juri_> right now, i have a property test i can always guarantee should be true, and a single value that, if it's big enough, will do that job. i just need to know when to use a big value, when to use a small one, and how to make that decision based on the inputs available.
15:45 < muurkha> the standard approach is that when you, say, multiply two intervals [a, b] · [c, d], you do a case analysis on signs to figure out which of the four products ac, ad, bc, bd to compute, and with which rounding modes (toward -∞ or +∞), and compute their min and max as the new interval
15:46 < muurkha> if a < 0 < c and b < 0 < d for example then you only need to compute two products, which I think are ad with rounding toward -∞ and bc with rounding toward +∞
15:47 < muurkha> but in some other cases you have to compute all four products
15:47 < juri_> weird. i wonder why they do it that way. according to the x86_64 manual, the maximum error of a multiply is one ULP. there are funny cases where the ULP changes depending on the value, so to make sure i cover those, i do the operation in 'closest' mode, then do it again in 'toward +', and use the ULP of the toward +.
15:48 < muurkha> if you want to represent [a, b] as {estimate: ½(a+b), error: ½(a-b)} that is equivalent but less common
15:48 < juri_> i then treat the ULP as a plus-or-minus range of my value.
15:48 < muurkha> as I understand it, the IEEE standard guarantees that the maximum error for +, -, ×, ÷, and √ is half an ULP
15:48 < juri_> yep. i'm fine overestimating.
15:49 < muurkha> and I think amd64 also makes that guarantee for FMA
15:49 < juri_> not seeking perfection here. just trying to get an engine that works.
15:49 < muurkha> right
15:49 < muurkha> I guess I should have said ½|a-b| or ½(b-a)
15:50 < juri_> the problem now is what do i do with it? i've figured out how to use it to find out how much error is in a line at point X, but i don't know how to use it to determine the error circle around an intersection of two lines.
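[editor's note: a minimal Haskell sketch of the interval-arithmetic scheme muurkha describes above. It does not touch FPU rounding modes the way a real interval library would; instead it widens each computed endpoint outward by one ULP of that endpoint, which over-approximates the half-ULP error IEEE 754 guarantees for +, -, ×, ÷, in the spirit of juri_'s "fine overestimating". All names here (ulp, Interval, widen, iadd, imul) are illustrative, not from any existing library.]

    -- One "unit in the last place" of a Double: the spacing between x and the
    -- next representable value of the same magnitude.  Zero and subnormals are
    -- lumped together and handled very conservatively.
    ulp :: Double -> Double
    ulp x
      | x == 0 || isDenormalized x = encodeFloat 1 (fst (floatRange x) - floatDigits x)
      | otherwise                  = encodeFloat 1 (snd (decodeFloat x))

    -- A closed interval [lo, hi] of Doubles.
    data Interval = Interval { lo :: Double, hi :: Double } deriving Show

    -- Widen outward by one ULP at each endpoint, standing in for directed
    -- rounding (toward -infinity on the low end, toward +infinity on the high end).
    widen :: Interval -> Interval
    widen (Interval a b) = Interval (a - ulp a) (b + ulp b)

    iadd, imul :: Interval -> Interval -> Interval
    iadd (Interval a b) (Interval c d) = widen (Interval (a + c) (b + d))
    -- Multiplication: min and max of the four endpoint products.  The sign
    -- case analysis muurkha mentions avoids computing all four products;
    -- computing them all is slower but always safe.
    imul (Interval a b) (Interval c d) = widen (Interval (minimum ps) (maximum ps))
      where ps = [a * c, a * d, b * c, b * d]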
15:51 < muurkha> the idea with interval arithmetic is that you just use the normal formulas (factored to minimize repetition) but with interval-arithmetic operations instead of regular arithmetic
15:51 < juri_> small values of fudge factor are fine for small angles, but the steeper the angle, the more likely error is significant. i just need to find that relationship.
15:52 < muurkha> I haven't really done this though except for tiny toy programs
15:53 < muurkha> so in this conversation I'm just a librarian
15:53 * juri_ nods.
15:53 < juri_> i haven't done any of this before, so.. :D
15:53 < juri_> i'm a farmer. i grow really good habanero peppers.
15:53 < muurkha> that still puts your experience 90% of an implementation ahead of mine :)
15:54 < muurkha> ooh nice! what zone do you live in?
15:54 < muurkha> do you speak JS?
15:54 < juri_> waaaay too far north, but at least the apartment balcony faces north...
15:55 < juri_> (i'm from the south of the US, but now live in berlin)
15:55 < muurkha> I did an interval arithmetic thing in JS at http://canonical.org/~kragen/sw/aspmisc/intervalgraph
15:56 < muurkha> beware, its stupid parser associates subtraction and division wrong because I am dum
15:56 < juri_> :D
15:56 < muurkha> but hopefully the code should be understandable
15:56 < muurkha> of course it doesn't tweak the FPU rounding modes *either*
15:57 < juri_> you can do that, in haskell. :D
15:57 < muurkha> so it might get the wrong answer at times due to not widening the interval by the requisite rounding error
15:58 < muurkha> yes, you said :) too bad I wasn't using Haskell
16:00 < kanzure> why floating point? why not just giant integers?
16:00 < kanzure> bignum stuff
16:01 < muurkha> do you mean fixed-point, rationals, or what?
16:02 < muurkha> two lines with integer coordinates will have a rational intersection, but often the representation size of a rational grows exponentially with the number of operations that went into calculating it
16:02 < juri_> because floating point is everywhere? i want something that *runs*.
16:02 < muurkha> a line and a circle with integer coordinates will generally not have a rational intersection
16:03 < kanzure> if you have a fixed precision you care about then why not use a fixed point number
16:04 < muurkha> GHC does seem to have bignums built in (is that part of the Haskell definition?) but I'd think that for most slicer needs 64 bits of precision would be plenty
16:05 < juri_> I've tried building engines with rationals before. i found i was better off sticking to floating point, because the amount of CPU that leaks out is pretty extreme.
16:05 < juri_> (i have a Rational fork of ImplicitCAD)
16:06 < muurkha> yeah, exponentially growing numbers will do that ;)
16:06 < muurkha> I hear Berlin is nice
16:06 < juri_> muurkha: i've never been this at home anywhere.
16:07 < juri_> and i still barely speak the language. people like me aren't "meant" to be from the south.
16:08 < muurkha> nice :)
16:10 < muurkha> yeah, the US South has its virtues, but I don't enjoy its anti-intellectualism, machismo, and poverty
16:10 < muurkha> it does not represent my vision of a good future for humanity
16:13 < muurkha> kanzure: I don't think fixed point necessarily helps with the problem of roundoff error
16:29 < kanzure> juri_: bob@mcelrath.org says he knows a few tricks for you if you email him
16:33 < juri_> ooo. will do!
16:34 < muurkha> sweet
16:35 < muurkha> I'm interested to hear what you end up doing!
16:35 < juri_> pulling out my hair and screaming? oh wait, i'm doing that already. ;)
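[editor's note: to make muurkha's earlier point concrete, that two lines with integer (or rational) coefficients meet at an exactly representable rational point, here is a small Haskell sketch using the Prelude's Rational and Cramer's rule. The Line type and intersect function are illustrative only, not ImplicitCAD's.]

    import Data.Ratio ((%))

    -- A line given implicitly as a*x + b*y = c, with exact rational coefficients.
    data Line = Line { la, lb, lc :: Rational } deriving Show

    -- Exact intersection point, or Nothing for parallel lines (zero determinant).
    -- No roundoff, so no error circle -- at the cost of numerators and
    -- denominators that can grow with every operation, which is the CPU cost
    -- juri_ ran into.
    intersect :: Line -> Line -> Maybe (Rational, Rational)
    intersect (Line a1 b1 c1) (Line a2 b2 c2)
      | det == 0  = Nothing
      | otherwise = Just ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
      where det = a1 * b2 - a2 * b1

    -- e.g. intersect (Line 1 1 2) (Line 1 (-1) 0)  ==  Just (1 % 1, 1 % 1)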
16:39 < muurkha> I meant in the code ;)
16:53 < juri_> my next trick is going to be trying to adapt my property test into something i can drop the output of into gnuplot. see if i can play with the curves.
16:57 -!- Gooberpatrol66 [~Gooberpat@user/gooberpatrol66] has joined #hplusroadmap
17:11 < L29Ah> juri_: is Rational strict? did you use Int or Integer-based Rational?
17:11 < muurkha> strict in the sense of non-lazy?
17:11 < L29Ah> muurkha: yes
17:12 < L29Ah> eager
17:26 -!- codaraxis__ [~codaraxis@user/codaraxis] has joined #hplusroadmap
17:30 -!- codaraxis___ [~codaraxis@user/codaraxis] has quit [Ping timeout: 246 seconds]
17:30 < juri_> it's lazy, and i used integer-based.
17:36 -!- codaraxis___ [~codaraxis@user/codaraxis] has joined #hplusroadmap
17:40 -!- codaraxis__ [~codaraxis@user/codaraxis] has quit [Ping timeout: 252 seconds]
17:41 -!- codaraxis___ [~codaraxis@user/codaraxis] has quit [Ping timeout: 252 seconds]
17:47 -!- gnawk [~gnawk@76-196-232-104.lightspeed.sndgca.sbcglobal.net] has quit [Remote host closed the connection]
18:28 -!- yashgaroth [~ffffffff@2601:5c4:c780:6aa0::1909] has quit [Quit: Leaving]
18:31 < docl> hmm. with each calculation you have a certain amount of noise from the floating point errors, which widens the circle... can you make a series of circles this way and see how much bigger each one is?
18:54 * L29Ah allows
20:33 -!- darsie [~darsie@84-113-55-200.cable.dynamic.surfer.at] has quit [Ping timeout: 265 seconds]
22:58 -!- sgiath [~sgiath@mail.sgiath.dev] has quit []
22:58 -!- sgiath [~sgiath@mail.sgiath.dev] has joined #hplusroadmap
--- Log closed Mon Oct 03 00:00:36 2022
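[editor's note: a closing sketch of docl's suggestion, and of juri_'s "drag the ULP around with the value": pair each Double with a plus-or-minus error radius, grow the radius on every operation, and watch how the "circle" widens across a chain of calculations. This is first-order worst-case propagation plus one ULP of each result for rounding; every name here is hypothetical, not from juri_'s code.]

    -- A value with a plus-or-minus error radius ("the circle").
    data Approx = Approx { val :: Double, err :: Double } deriving Show

    -- One ULP of a normal, nonzero Double (same idea as the ulp helper in the
    -- interval sketch above; zero and subnormals are not handled here).
    ulpOf :: Double -> Double
    ulpOf x = encodeFloat 1 (snd (decodeFloat x))

    addA, mulA :: Approx -> Approx -> Approx
    -- Sum: the radii add, plus rounding of the result.
    addA (Approx x ex) (Approx y ey) = Approx s (ex + ey + ulpOf s)
      where s = x + y
    -- Product: (x ± ex)*(y ± ey) deviates from x*y by at most
    -- |x|*ey + |y|*ex + ex*ey, plus rounding of the result.
    mulA (Approx x ex) (Approx y ey) = Approx p (abs x * ey + abs y * ex + ex * ey + ulpOf p)
      where p = x * y

    -- docl's "series of circles": iterate an operation and print the radii, e.g.
    --   mapM_ print (take 6 (iterate (\a -> mulA a a) (Approx 1.0001 (ulpOf 1.0001))))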