2011-07-10.log

--- Log opened Sun Jul 10 10:49:36 2011
10:49 -!- gnusha [~gnusha@131.252.130.248] has joined ##hplusroadmap
10:49 -!- Topic for ##hplusroadmap: http://gnusha.org/skdb/ http://groups.google.com/group/diybio http://bit.ly/diybionews http://gitduino.com/ http://gadaprize.org/ | logs: http://gnusha.org/logs/
10:49 -!- Topic set by kanzure [~kanzure@131.252.130.248] [Thu Jan 20 10:44:20 2011]
10:49 [Users ##hplusroadmap]
10:49 [ AlonzoTG] [ devrandom ] [ Fiohnel ] [ jrayhawk ] [ saurik ]
10:49 [ archels ] [ drazak ] [ flamoot ] [ kanzure ] [ seanstickle]
10:49 [ augur ] [ elmom ] [ foucist ] [ mjr ] [ streety ]
10:49 [ bkero ] [ epitron ] [ gnusha ] [ nchaimov ] [ superkuh ]
10:49 [ CapNemo ] [ eridu ] [ Helleshin ] [ nuba ] [ uniqanomaly]
10:49 [ CIA-18 ] [ fenn ] [ JaredWigmore] [ pasky ] [ Utopiah ]
10:49 [ dbolser ] [ ferrouswheel] [ jmil ] [ PixelScum] [ ybit ]
10:49 -!- Irssi: ##hplusroadmap: Total of 35 nicks [0 ops, 0 halfops, 0 voices, 35 normal]
10:49 -!- Channel ##hplusroadmap created Thu Feb 25 23:40:30 2010
10:49 -!- Irssi: Join to ##hplusroadmap was synced in 5 secs
10:52 -!- Fiohnel [~r3idslash@111.94.200.53] has quit [Read error: Connection reset by peer]
10:53 -!- Fiohnel [~r3idslash@111.94.200.53] has joined ##hplusroadmap
12:25 -!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has joined ##hplusroadmap
12:50 -!- flamoot [42f18c6f@gateway/web/freenode/ip.66.241.140.111] has quit [Quit: Page closed]
13:02 <ybit> what's the process of blocking friends who spam on google+ like they do everywhere else?
13:03 <ybit> i'd hate to block them from messaging me because they are friends, their stream just sucks
13:03 <ybit> oooh
13:03 <ybit> i see
13:04 <ybit> hurpa durp
13:09 -!- lumos [~lumos@afbu181.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
13:21 <ybit> heathmatlock@gmail.com add me on google+ or spam me i don't care
13:22 -!- lumos [~lumos@afbu181.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 255 seconds]
13:35 -!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
13:39 <ybit> http://www.youtube.com/watch?v=EC5sbdvnvQM
13:40 <ybit> "1966 prediction of the home computer"
14:04 <uniqanomaly> is there some prediction of the year when 100% of humanity will be rational?
14:04 <Utopiah> +oo ?
14:04 <uniqanomaly> ;]
14:05 <mjr> nah, humanity will be destroyed before +oo
14:05 <Utopiah> by itself?
14:07 <seanstickle> By aliens
14:07 <seanstickle> Or super-intelligent cats
14:08 <mjr> super-intelligent cat aliens
14:08 <seanstickle> That is the 3rd option, yes
14:09 <seanstickle> We can refer to them as SICA
14:09 <Utopiah> sounds scientific enough
14:10 <uniqanomaly> I have a question: can superstition-minded religious people and more rational agnostics equally be called "humans"?
14:10 <seanstickle> Yes
14:10 <seanstickle> Next question
14:10 <Utopiah> uniqanomaly: you might like #lesswrong
14:11 <uniqanomaly> i mean shouldn't there be different terms to differentiate them
14:11 <uniqanomaly> Utopiah: I already do
14:11 <uniqanomaly> right
14:15 <kanzure> #lesswrong is banned in here
14:16 <uniqanomaly> kanzure: you mean like talking doesn't get anything done?
14:17 <kanzure> and various other reasons
14:19 -!- uniqanomaly [~ua@dynamic-78-8-91-216.ssp.dialog.net.pl] has quit [Quit: uniqanomaly]
14:19 -!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has quit [Quit: nchaimov]
14:21 -!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has joined ##hplusroadmap
14:22 -!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has left ##hplusroadmap []
14:22 -!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has joined ##hplusroadmap
14:29 <jrayhawk> kanzure do you have a list of entities and topics of discussion that infuriate you somewhere
14:30 <jrayhawk> i ask for completely legitimate reasons and not just for fodder to troll you with
14:32 <kanzure> keeping a file like that doesn't sound healthy
14:32 <kanzure> 'angerfile'
14:33 <jrayhawk> ah, so you're saying self-quantification and analysis is unhealthy. interesting. perhaps i can interest you in this website, lesswrong.org?
14:33 <kanzure> why not a happyfile instead
14:33 <kanzure> i'm sorry but i've sort of given up against lesswrong, i don't know what to do to stop them
14:33 <jrayhawk> would it be filled with dancing kirbys
14:33 <mjr> I think it's .com though
14:34 <mjr> (which is too bad if the US some day decides to get someone extradited over it)
14:34 <jrayhawk> and also is subject to Verisign fuckery like SiteFinder
14:35 <seanstickle> "stop" lesswrong? Are they actually doing anything beyond writing articles that get submitted to HN?
14:35 <kanzure> seanstickle: http://siai.org/
14:36 <kanzure> what i fail
14:36 <kanzure> fail fail fail
14:36 <kanzure> i clearly meant singinst.org
14:36 <seanstickle> That seems to be pretty much the same thing
14:36 <kanzure> no
14:36 <seanstickle> Except they all gather together to talk about the articles
14:36 <kanzure> lesswrong claims to just be about educating people about rationality
14:37 <kanzure> but in reality they are made up of singularitarians trying to rationalize their rationalizations about risk rationalization or something dangerous to that effect
14:37 <jrayhawk> not quite
14:37 <kanzure> yeah i botched that
14:37 <seanstickle> wha?
14:37 <kanzure> if i were more eloquent about this it wouldn't be as much of a problem
14:38 <jrayhawk> heehee
14:38 <seanstickle> Aren't they just Raelians, except with HAL standing in for the Space Aliens?
14:38 <kanzure> everything's religious with you.. :p
14:39 <kanzure> well
14:39 <kanzure> no, based on my experience they are more sophisticated
14:39 <seanstickle> religion is just a manifestation of a tribal culture
14:39 <kanzure> i am not interested in hearing your views on religion (sorry)
14:39 <seanstickle> So?
14:39 <seanstickle> Then don't bring it up
14:39 <seanstickle> Easy enough
14:39 <jrayhawk> so, SIAI has a problem in that hard AI takeoff is such an intractable problem that they lack good means of thinking about it, so they're hoping to generate more mindshare for the problem using a sort of rationalist religion to recruit people.
14:39 <kanzure> well boiling everything down to religion doesn't make for much of a conversation
14:40 <kanzure> jrayhawk: there's a lot of pieces missing in there.. like,
14:40 <kanzure> well i guess "rationalist religion" covers the fearmongering aspects and FUD
14:40 <seanstickle> I'm not sure "trying to rationalize their rationalizations about risk rationalization" leads to much of a better conversation
14:41 <kanzure> but their assumptions on actors/agents/decision theory/representation seem a little off somewhere
14:41 <kanzure> which again i guess can be grouped under religion
14:41 <kanzure> ok nevermind jrayhawk
14:42 <jrayhawk> Well, the religious part is that they think the sort of 'rationalist singularity' they're trying to jumpstart will be revolutionary and amazing
14:42 <seanstickle> rapture of the geeks indeed
14:43 <kanzure> heh their version of the precautionary principle is sorta extreme.. it's a proactionary-precautionary principle ("kill all the other AIs that are emerging")
14:43 <jrayhawk> (if you get a bunch of rationalists together discussing rationalism, they will be able to make themselves more rational faster and better, etc.)
14:43 <seanstickle> genocide of the AIs be damned, apparently
14:44 <jrayhawk> so, in a way, they're trying to combat AI singularity with human rationalist singularity, and that just seems dumb to me considering that rationalism isn't really all that empowering.
14:44 <seanstickle> Oh, the manichean dialectic is powerful with this one
14:44 <kanzure> jrayhawk: i feel that saying "they are just promoting rationality, and not actively doing XYZ" is somewhat dishonest.. because most of them, given some rationalization, *will* follow through with it and convince themselves that "oh, therefore we should make an ai that kills everything that's a potential threat to its global dominance" (etc.)
14:46 <mjr> that's rather disingenuous (or merely dumb)
14:46 <kanzure> nope, it's "rational"
14:47 <kanzure> (okay maybe that's unfair :))
14:47 <mjr> I was talking about your strawman
14:47 <jrayhawk> Or, I guess I should say, empowerment from rationalism has diminishing returns
14:47 <seanstickle> I like how all the visiting fellows are white guys, with the exception of 2 women and 1 asian.
14:47 <seanstickle> Cute.
14:48 <jrayhawk> And the places where the highest returns are being made are the places lesswrong is specifically unbelievably hostile towards.
14:48 <kanzure> mjr: i'm p. sure a particular person has published a lot on that scheme
14:48 <kanzure> jrayhawk: elaborate?
14:48 <kanzure> what is lesswrong hostile to, anyway?
14:49 <mjr> I'm pretty sure you're (possibly willfully) misreading said publications
14:49 <jrayhawk> Irrationalist religion.
14:49 <kanzure> mjr: that's possible, but i've spent a lot of time talking with these friends and i'm not convinced i'm misreading
14:50 -!- QuantumG [~qg@rtfm.insomnia.org] has joined ##hplusroadmap
14:51 <mjr> *shrug* "killing" is a blunt instrument that you just presume to be the one to use on possible competitor singleton-wannabes
14:51 <kanzure> okay, retarding them
14:51 <kanzure> might as well kill 'em
14:52 <mjr> there is sandboxing, if it can be shown to be effective in that scope (if not, well, we're screwed anyway *shrug*)
14:52 <kanzure> yeah i wouldn't let you sandbox me..
14:53 <mjr> rather irrelevant whether you would or wouldn't
14:53 <seanstickle> Ethical issues involved in killing AIs are not yet settled
14:53 <mjr> you'll be sandboxed too, or we're screwed ;)
14:53 <seanstickle> But I imagine there will be some
14:53 <kanzure> "my gun is bigger than yours! nuh-uh! times 1000! times infinity!"
14:54 <mjr> seanstickle, yeah. Meanwhile, AIs are especially easy not to kill; merely chuck them away somewhere until they can be run safely
14:54 <kanzure> it doesn't matter if you are talking about ai.. might as well be talking about uploads
14:54 <seanstickle> mjr: I don't see that we have AIs yet, so I have no idea how easy it is to just chuck them away
14:54 <mjr> of course, but now we were talking about AIs
14:54 <mjr> seanstickle, ...
14:55 <kanzure> well you can shoot someone and they die
14:55 <mjr> They're data.
14:55 <kanzure> (they're running on biology and nucleic acids)
14:55 <kanzure> so there's your start.
14:55 <seanstickle> mjr: you're data too
14:56 <mjr> seanstickle, if you mean that you'll first have to _stop_ them running (along with any failsafes), yeah, that's the harder part, but it was pretty much presumed that there was the ability to kill. Ability to store while you're at it, well, okay, it's slightly harder in some circumstances, but a lot easier than with humans.
14:56 <kanzure> jrayhawk: i'm a little surprised "proactionary principle" isn't mentioned on lesswrong
14:57 <kanzure> aw crud
14:57 <kanzure> and the only mention of 'precautionary principle' is bgoertzel's article
14:57 <seanstickle> mjr: I have no idea how hard it is, and since we don't have any AIs yet, I'm not sure anyone else knows how hard it is either
14:57 <mjr> ...
14:57 <mjr> okay, be that way
14:57 <kanzure> http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_institutes_scary/
14:57 <jrayhawk> haha
14:57 <kanzure> http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html
14:58 <seanstickle> mjr: submarines aren't fish
14:58 <mjr> computer programs are computer programs
14:58 <seanstickle> AI != computer program
14:58 <seanstickle> At least, this has not yet been demonstrated to be the case
14:58 <mjr> true, because not all computer programs are AI
14:59 <seanstickle> I don't know how to type a subset symbol
14:59 <seanstickle> But you know what I mean
14:59 <kanzure> \subset{}
14:59 <seanstickle> Ha
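For reference, kanzure's reply is the LaTeX math-mode command for the subset symbol. A minimal sketch of its use, assuming a standard LaTeX document; the operand names are hypothetical, chosen to match the AI-vs-programs point above:

    % \subset typesets the subset symbol; \subseteq also allows equality
    $\mathrm{AI} \subset \mathrm{ComputerPrograms}$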
kanzurejrayhawk: "provably non-dangerous AGI" is exactly the signs of a precautionary principle15:00
kanzure"SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that:"15:00
kanzure"progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race."15:01
mjrFine, if you do an AI using squishy goo, it'll also be somewhat more harder to store than to kill. Here in the relevant world, we're talking mostly software, possibly some custom hardware but turingy anyway15:01
kanzure"But SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk."15:02
seansticklemjr: we may be talking about software now, but there's no convincing evidence that a super-intelligent AI is possible with just mostly software.15:03
kanzure"provably non-dangerous AGI 'is highly likely to lead to an involuntary end for the human race'" is sort of cheating, since they can claim "well it's only a 99.9999% chance"15:03
kanzurein reality i do think that they are some who think it's a 100% chance15:03
seansticklemjr: although, to be fair, there's no evidence that it's not possible either15:03
mjrit's trivially possible via church-turing thesis15:04
kanzurei feel that my issues with lesswrong are mostly because of siai.. the rationalist religion stuff is just funny and wouldn't be of concern otherwise15:04
seansticklemjr: really?15:04
jrayhawkDo you object to the Scary Idea, or what they're doing about the Scary Idea?15:04
mjr(though said thesis is not formally provable, it's strength as a null hypothesis takes a lot to budge it)15:05
kanzurejrayhawk: yes15:06
kanzurei don't object to rationalist development but there is too much sickness in that particular community15:06
jrayhawkDo you object to the Scary Idea because you think it's implausible, or because you're a misanthrope?15:06
kanzurefalse dichotomy?15:07
jrayhawkI mean the "unacceptably high probability of human extinction" being implausible15:07
kanzurethere's a number of reasons and i don't think i have one single biggest one15:07
kanzurebut15:07
mjrapropos I don't object to the scary idea, but don't really find it scary because I'm a misanthrope ;)15:07
jrayhawkYeah, I'm a misanthrope, too.15:08
kanzure"provably non-dangerous GI" is something i strongly object to15:08
kanzuredidn't someone do an existence proof of provably-non-dangerous-GI that totally failed?15:08
mjrum, does that tell us something?15:08
jrayhawkI don't see how that would be evidence of anything, yeah.15:08
kanzurewell it's evidence of a lack of foundation15:09
mjrAnyway, sure provably non-dangerous GI may turn out to be a pipe dream. Good that there are people smoking that pipe, though, 'cause if so, we're toast ;)15:09
kanzureincluding their inabilities to define intelligence in anything other than AIXI or whatever15:09
jrayhawkUh, risks are always speculative. That's what makes them risks.15:10
jrayhawkThey're saying "we don't know enough about this and it could kill us", and you appear to be saying "they don't know enough to say they don't know enough"?15:10
kanzureso why would i waste my risk tolerance or risk points on a precautionist stance15:11
jrayhawkSo it seems to be more what they're doing about the scary idea that you object to, then.15:12
kanzurereally?15:13
kanzurei'd definitely say that there's some probability between 0 and 1 non-inclusive that any ai could totally eliminate humanity.. sure15:13
mjrand I'd say you just seem to grossly underestimate said risk, but I'll just presume you've read enough of the relevant material to not bother to try recalibrating you15:15
kanzuremjr: maybe it would be helpful for you if i just assume that ai will destroy everything, and continue this discussion on that premise15:18
jrayhawkThe more interesting part is that a transhumanism singularity is far more likely to result in a sort of "humanity" that has some prayer of managing and surviving an AI takeoff as it happens than a rationalism singularity.15:19
jrayhawkSo what Kanzure does every day is already a good risk-management practice.15:19
mjrnah, I'm going to be so helpful that I'm gonna fall asleep soon, thereby ending this waste of time for now15:19
kanzurejrayhawk: where "a rationalism singularity" would be something abuot making people more aware of risks and decisions that may or may not cause an ai to launch paperclips15:19
kanzure?15:19
fenni wonder if a hardware project would qualify (i.e. pipetbot)15:19
fennfor the diybio grant15:19
kanzurefenn: i'm still trying to figure out if it's "business-only"15:20
kanzurefenn: did you see L001?15:20
kanzureer, LH00115:20
QuantumGI still haven't seen anyone with a hope in hell of making a human-level intelligence15:20
kanzureQuantumG: your mom and dad15:20
jrayhawkhaha15:20
kanzureunless you're just a .. oh my god15:20
QuantumGyou haven't met my brothers15:20
kanzurei've met jrayhawk's brothers O_o15:21
jrayhawkBoth of them?15:21
kanzurewell15:22
-!- seanstickle [~seanstick@c-98-218-2-48.hsd1.dc.comcast.net] has left ##hplusroadmap []15:22
kanzurethere's still this non-neglible chance that you are all the same person15:22
kanzurebut no i've only met steve i guess15:22
jrayhawkcunning disguises15:22
kanzuremaster of them?15:22
QuantumGin any case, I think people who make cakes from a box mix have a greater claim to have "made" that cake than most parents do to their children.15:23
kanzurei can't stop listening to http://www.youtube.com/watch?v=9HSj-2shbqY :(15:23
kanzurebbl.. bbq15:23
jrayhawka "rationalism singularity" being one where getting smarter to help eachother get smarter results in in smarter people faster (even though rationalism is more the art of doing 'less wrong', which, again, involves diminishing returns)15:23
kanzurejrayhawk: oh there's also all the control-of-control-of-control stuff i forgot to consolidate here15:25
kanzurejrayhawk: "resulting in smarter people".. so.. at least some aspect of transhumanism?15:25
jrayhawkIt's not really transformative, though, it's just making the best of what we've got.15:26
jrayhawkWhile I'm sure transhuman elements will be drawn into it, they obviously aren't the primary focus.15:27
kanzurebbl for realz15:27
QuantumGand then there's this http://www.youtube.com/watch?v=AUQG9PhDNnk15:30
jrayhawkthis guy says a lot of words15:34
jrayhawki wish he would say fewer15:34
15:39 -!- flamoot [42f18c6f@gateway/web/freenode/ip.66.241.140.111] has joined ##hplusroadmap
15:40 <flamoot> anyone
15:40 <jrayhawk> hello
15:45 <QuantumG> http://www.youtube.com/watch?v=4VBPM67ddY0 is gold too
15:45 -!- nsh [~nsh@wikipedia/nsh] has joined ##hplusroadmap
15:51 <fenn> mmmm.. sandwich clamps *drool*
15:52 <fenn> you can use a CD-ROM laser for the pulsed laser microfluidics pump
15:52 <fenn> so basically free
15:53 <fenn> also that's interesting for DIY bubble-jet perhaps
15:56 <nsh> sandwich clamps?
16:13 -!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has quit [Quit: Leaving]
16:50 -!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
16:55 -!- eridu [~eridu@gateway/tor-sasl/eridu] has quit [Ping timeout: 250 seconds]
16:59 -!- dbolser [~dmb@bioinformatics.org] has quit [Ping timeout: 250 seconds]
17:07 -!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has quit [Quit: Leaving]
17:08 -!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
17:24 -!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has quit [Quit: Computer has gone to sleep.]
17:26 <fenn>
17:26 -!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has joined ##hplusroadmap
17:27 <fenn> i can make a business around pipetbot
17:27 <fenn> might have to redesign it from scratch though; ultimaker might not be happy if i used their files for commercial gain
17:30 <fenn> let's make a blog called "more right"
17:30 <fenn> mistakes are the best way to learn
17:53 -!- eridu [~eridu@gateway/tor-sasl/eridu] has joined ##hplusroadmap
18:07 -!- flamoot [42f18c6f@gateway/web/freenode/ip.66.241.140.111] has quit [Quit: Page closed]
18:13 -!- eudoxia [~eudoxia@r190-135-39-218.dialup.adsl.anteldata.net.uy] has joined ##hplusroadmap
18:22 -!- eudoxia [~eudoxia@r190-135-39-218.dialup.adsl.anteldata.net.uy] has quit [Ping timeout: 252 seconds]
18:30 -!- JayDugger [~duggerj@pool-173-74-79-43.dllstx.fios.verizon.net] has joined ##hplusroadmap
18:43 <fenn> "LH001" is a poor name choice
18:44 <fenn> also, if that was in reply to me talking about pipetbot, they do different things
18:50 -!- JayDugger [~duggerj@pool-173-74-79-43.dllstx.fios.verizon.net] has left ##hplusroadmap ["Leaving."]
18:59 <nsh> who are you talking to?
19:00 <QuantumG> fenn don't need anyone else to talk "to"
19:09 -!- lumos [~lumos@adqb198.neoplus.adsl.tpnet.pl] has quit [Read error: Connection reset by peer]
19:17 -!- klafka [~textual@cpe-69-205-70-55.rochester.res.rr.com] has quit [Quit: Computer has gone to sleep.]
19:25 <nsh> right
19:30 <foucist> kanzure: so are you a startuper
19:31 <nsh> just an upstart
19:41 -!- eridu [~eridu@gateway/tor-sasl/eridu] has quit [Ping timeout: 250 seconds]
20:15 -!- jmil [~jmil@2001:468:1802:e148:223:32ff:feb1:9dfc] has quit [Quit: jmil]
20:28 <augur> happy tesla day! :D
20:32 <kanzure> foucist: i have a few projects :p
20:32 <kanzure> and i've been around the block i guess..
20:33 <kanzure> fenn: what's the difference between liquid handling and liquid pipetting?
20:37 <foucist> augur: let's commemorate tesla day by pumping electricity into the ground and using that to power all sorts of devices all over the world!
20:38 <kanzure> let's use the earth to electrocute the moon
20:38 <foucist> ?
20:38 <kanzure> you have to think like tesla: grand scale shit
20:39 <augur> GRAND SCALE YEAH
20:39 <augur> FUCK YOU, MOON
20:48 -!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has quit [Read error: Connection reset by peer]
20:49 -!- nchaimov [~nchaimov@c-24-20-202-138.hsd1.or.comcast.net] has joined ##hplusroadmap
21:05 -!- nsh [~nsh@wikipedia/nsh] has quit [Ping timeout: 260 seconds]
21:11 -!- jmil [~jmil@c-68-81-252-40.hsd1.pa.comcast.net] has joined ##hplusroadmap
21:12 -!- mayko [~mayko@71-22-217-151.gar.clearwire-wmx.net] has joined ##hplusroadmap
22:16 <kanzure> fenn: well, joe delivered on alex kiselev getting the money
22:17 <kanzure> so when he says he can get this he's probably right.. but likely not on his idea
22:17 <kanzure> it'll be easy to line up aubrey to "advise" or whatever to throw some names on board.. of.. uh, whatever it is
22:17 <kanzure> i don't see why a pipetbot would be of particular interest though?
22:18 <kanzure> like if it's straight business, the obvious thing to do would be to copy some old business that already works
22:19 <kanzure> if it's something that's supposed to be achievable, useful, not-necessarily-turned-into-a-company-or-else, i think you or even i would have much better ideas?
22:39 <kanzure> heh now i am getting ads for "ePCR" or "electronic patient care reporting"
22:40 -!- mayko [~mayko@71-22-217-151.gar.clearwire-wmx.net] has quit [Ping timeout: 240 seconds]
22:40 <kanzure> why would they choose the same acronym
22:47 -!- BaldimerBrandybo [~PixelScum@ip98-177-175-88.ph.ph.cox.net] has joined ##hplusroadmap
22:51 -!- PixelScum [~PixelScum@ip98-177-175-88.ph.ph.cox.net] has quit [Ping timeout: 276 seconds]
23:18 -!- jmil [~jmil@c-68-81-252-40.hsd1.pa.comcast.net] has quit [Quit: jmil]
--- Log closed Mon Jul 11 00:00:37 2011
