--- Log opened Sun Mar 20 00:00:05 2016
00:17 -!- cluckj [~cluckj@pool-108-16-231-242.phlapa.fios.verizon.net] has quit [Ping timeout: 244 seconds]
00:47 -!- ArturShaik [~ArturShai@37.218.157.134] has joined ##hplusroadmap
01:19 -!- sandeepkr__ [~sandeep@111.235.64.4] has joined ##hplusroadmap
01:33 -!- irseeyou [~irseeyou@c-67-168-101-20.hsd1.wa.comcast.net] has left ##hplusroadmap []
03:02 -!- T0BI [~ObitO@41.101.164.148] has quit [Ping timeout: 268 seconds]
03:02 -!- T0BI [~ObitO@41.101.164.127] has joined ##hplusroadmap
03:05 -!- fleshtheworld [~fleshthew@2602:306:cf0f:4c20:70ac:a15c:3676:f18] has quit [Read error: Connection reset by peer]
03:24 -!- mf1008 [~mf1008@unaffiliated/mf1008] has quit [Remote host closed the connection]
03:25 -!- mf1008 [~mf1008@unaffiliated/mf1008] has joined ##hplusroadmap
03:33 -!- T0BI [~ObitO@41.101.164.127] has quit [Ping timeout: 240 seconds]
03:35 -!- T0BI [~ObitO@41.101.160.137] has joined ##hplusroadmap
03:45 -!- Houshalter [~Houshalte@oh-71-50-63-56.dhcp.embarqhsd.net] has quit [Quit: Quit]
04:17 -!- Gurkenglas [Gurkenglas@dslb-188-106-113-006.188.106.pools.vodafone-ip.de] has joined ##hplusroadmap
04:27 -!- mabel [~mabel@unaffiliated/jacco] has quit [Ping timeout: 252 seconds]
05:08 -!- cluckj [~cluckj@pool-108-16-231-242.phlapa.fios.verizon.net] has joined ##hplusroadmap
05:20 < FourFire> > people who are concerned about "friendliness" are concerned about an always-overwhelmingly-powerful adversary and then wondering "wait maybe there's an equally powerful anti-adversary to overpower my by-definition-overwhelming adversary." which doesn't compute at all.
05:21 < FourFire> I have a slightly different perspective which doesn't sound as stupid
05:23 < FourFire> If, by the time the self-improving AGI software problem has been solved well enough for it to drag itself the rest of the way (and one is given resources to play with), human-level intelligence is no longer a hardware problem as well as a software problem,
05:23 < FourFire> then being made/running first is a substantial advantage
05:24 < FourFire> if General Intelligences do tend to gravitate towards the behavioural attractor of getting control over the most power*resources
05:24 < FourFire> locally.
05:27 < FourFire> Then, given those two premises, it is quite important to Humanity (or independent intelligences in general, i.e. those not optimized towards acquiring resources and eliminating competition) to make a defensive AGI before any AGI which is not defensive of unoptimized intelligences takes over the local resource pool (whether that be just the biosphere, the whole of Earth, our solar system or even the galaxy)
05:32 -!- CheckDavid [uid14990@gateway/web/irccloud.com/x-bcqycejuuqiqzquu] has joined ##hplusroadmap
05:37 -!- CheckDavid [uid14990@gateway/web/irccloud.com/x-bcqycejuuqiqzquu] has quit []
05:38 -!- CheckDavid [uid14990@gateway/web/irccloud.com/x-ybixrwxmpmkdvose] has joined ##hplusroadmap
05:49 < FourFire> > I don't think you actually care about friendliness. You care about autonomous super-robots wiping out humanity. So don't make autonomous super-robots.
05:49 < FourFire> maaku, yeah, just force everyone to never do something they might stand to benefit from substantially economically, especially large companies
05:50 < FourFire> > And don't give me any bull about super AIs convincing meat robots to do random shit
05:51 < FourFire> Why not? If a task is broken up into small enough pieces, anything can be done without ascertaining the purpose of its constituent parts, and all you need to make that happen is money
05:53 < FourFire> > FourFire first time?
05:54 < FourFire> Nah, like tenth, or third, getting that drunk.
05:54 < FourFire> social pressure sucks, but hopefully I got something out of it, scoring bro points with my colleagues, sunk cost fallacy...
05:58 < FourFire> kanzure, about AGI and "always-overwhelmingly-powerful", it just needs to be overpowering relative to the then-current human-descendant intelligences; there's plenty of room for variable levels of general ability above the limits of enhanced biological humans
06:30 -!- JayDugger [~jwdugger@108.19.186.58] has joined ##hplusroadmap
07:32 -!- Filosofem [~Jawmare@unaffiliated/jawmare] has quit [Ping timeout: 248 seconds]
07:35 -!- Jawmare [~Jawmare@unaffiliated/jawmare] has joined ##hplusroadmap
07:39 -!- AmbulatoryCortex [~Ambulator@173-31-155-69.client.mchsi.com] has joined ##hplusroadmap
08:01 -!- yashgaroth [~yashgarot@2602:306:35fa:d500:f5e0:f867:a11d:8d52] has joined ##hplusroadmap
08:06 -!- T0BI [~ObitO@41.101.160.137] has quit [Remote host closed the connection]
08:09 -!- T0BI [~ObitO@41.101.160.137] has joined ##hplusroadmap
08:15 -!- T0BI [~ObitO@41.101.160.137] has quit [Remote host closed the connection]
08:16 -!- T0BI [~ObitO@41.101.160.137] has joined ##hplusroadmap
08:22 -!- jollybard [~picou@166.62.244.33] has joined ##hplusroadmap
08:28 -!- AmbulatoryC0rtex [~Ambulator@173-31-155-69.client.mchsi.com] has joined ##hplusroadmap
08:32 -!- AmbulatoryCortex [~Ambulator@173-31-155-69.client.mchsi.com] has quit [Ping timeout: 240 seconds]
09:07 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
09:13 -!- T0BI [~ObitO@41.101.160.137] has quit [Ping timeout: 244 seconds]
10:01 < maaku> FourFire: there are so many assumptions wrapped up in that. assuming low-variance hard takeoffs, assuming rigid goal sets, assuming the AI has access to effectors, etc.
10:02 < maaku> FourFire: there is absolutely no economic justification for putting an AGI in a robot body
10:02 < maaku> I've asked many people in the MIRI sphere repeatedly why they keep asserting this. It makes no sense.
10:02 < maaku> http://lesswrong.com/lw/ne1/alphago_versus_lee_sedol/d67f
10:04 < maaku> Companies _don't_ want to cede decision making over to error-prone proto-AGIs. In fact they are very strongly incentivised against doing so. (And I challenge you to find a big organization doing AI research that would even consider doing so.)
10:04 < maaku> > Why not? If a task is broken up into small enough pieces, anything can be done without ascertaining the purpose of its constituent parts, and all you need to make that happen is money
10:05 < maaku> Just keep the thing inside an organization with protocols, rules, training, and oversight. This is a ridiculously solved problem.
10:05 < maaku> If you care about such things. I don't.
10:05 -!- nildicit_ [~nildicit@gateway/vpn/privateinternetaccess/nildicit] has quit [Ping timeout: 240 seconds]
10:06 < maaku> (New resolution: stop engaging with MIRI/FLI folk with the hope that you'd convince them to do something useful with their lives.)
10:07 < maaku> Hrm. Well, I did convince kaj sotala to seek (and get) funding for concept formation research. I guess that was a win.
10:08 < FourFire> I'm not convinced, see: social engineering hacks, but then my not being convinced presumes that an AGI with the faculties required to do social engineering (the goals/motivation are assumed due to its existing in a company which stands to benefit from automating $taskset) will be created for other purposes within a company
10:09 < FourFire> still, assuming that your protocols argument holds, you are still relying on a company being competent enough to not suffer from any systemic information warfare failure and to not fail economically, so that, say, an AGI system isn't sold off at some point in a liquidation of assets
10:09 < maaku> FourFire: don't put people with decision authority in a position to talk to the thing, and shut the thing down to investigate if it starts advocating for unexpected things
10:10 < maaku> But I'm glad you recognize that social engineering is not a natural outcome, but requires actual training in deception...
10:10 < FourFire> maaku, back to my claim that any task can be broken up
10:10 < FourFire> it doesn't matter; if the thing can talk to interns then you're screwed
10:10 < maaku> FourFire: so don't let it talk to interns
10:10 < maaku> Physical security is a thing.
10:11 < maaku> You know, badges. Security guards with guns. Locked rooms. Buildings within buildings. This is a solved problem.
10:11 -!- T0BI [~ObitO@41.101.160.137] has joined ##hplusroadmap
10:12 < FourFire> information security of the kind I presume is required to contain a not-explicitly-safe AI sounds implausible in a company that is intending to profit from researching general AI and selling it as a product
10:13 < FourFire> it's not a problem of incompetent humans getting through to the AI, it's a problem of the AI getting through to incompetent humans
10:13 < maaku> FourFire: my summary position is that this AGI existential threat scenario requires a chain of reasoning with about 10 things that have to go wrong, in series. Put basic checks against each one and there's no reason we can't proceed at full steam. The laws of probability ensure that 10 failures in series are acceptably unlikely.
10:14 < maaku> FourFire: show me a convincing business plan for selling AGI as a service.
10:14 < FourFire> and hell, even fully competent humans, if given the wrong information, can make it go wrong too; the thing just needs to be capable of corrupting any line of information flow in order to cause issues, again, if it isn't specifically designed to be human-safe
10:14 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 240 seconds]
10:14 < maaku> FourFire: and I'll show you a much more profitable business plan for selling the _outputs_ of AGI only and keeping the AGI close at hand.
10:15 < FourFire> Sure, and that company needs to have hella good information security
10:15 < FourFire> better than I anticipated being possible in an organization run by human beings
10:15 < maaku> FourFire: I've worked for the government. It is possible to do infosec right.
10:15 < FourFire> maybe an organization run by dumb automation, and owned by some humans... but then you need technicians to maintain it, and it's still fraught, infosec-wise
10:16 < FourFire> maaku, really? It didn't prevent the Snowden incident...
10:16 < maaku> Eh, this is an uninteresting conversation. If the superintelligence tiles the universe with computronium, mission fucking accomplished.
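[editor's note: maaku's 10:13 "chain of about 10 things that have to go wrong, in series" argument is the product rule for independent events. The sketch below is only an illustration of that arithmetic; the per-step failure probabilities are assumed, illustrative numbers, not figures from the discussion, and the whole calculation rests on the checks failing independently of one another.

    # Product rule for a chain of independent safeguards: the chance that
    # every one of them fails is the product of the per-step chances.
    # The 0.3 per-step figure is an assumption for illustration only.
    from math import prod

    p_step = [0.3] * 10              # ten safeguards, each assumed to fail 30% of the time
    p_all_fail = prod(p_step)        # ~5.9e-06
    print(f"P(all ten fail in series) = {p_all_fail:.1e}")

Even with individually weak checks, the joint failure probability shrinks geometrically, which is maaku's point; the contested part is whether the failures really are independent.]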
10:17 < FourFire> Fair enough. I'm staying away from AI research anyway, trying to make my DNA project happen
10:19 < maaku> I've only engaged in existential risk debate because I've seen so many smart people get lured into that honeytrap and do nothing important with their lives. :\
10:19 < sh> Beats doing something destructive with their lives.
10:20 < sh> And the 'nothing important' thing has yet to be shown.
10:21 < maaku> sh: if they haven't done anything important yet, they've got a low prior for ever doing anything important
10:21 < sh> Not what I meant.
10:23 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
10:25 < maaku> FourFire: what do you intend to use the DNA project for?
10:25 < maaku> Also I assume you are aware of the DNA-printing project other people here are working on
10:27 < FourFire> maaku, prove (or disprove) the feasibility of a tool that turns protein design into a "throw a tonne of computation at it" problem, using genetic algorithms to grade genes according to how well the proteins they are synthesized into perform.
10:28 < FourFire> if it works, I'll maybe evolve human histones from scratch within six years, and an intelligently designed DNA repair mechanism within thirty years.
10:28 < FourFire> oh yeah, and if it works I'm open-sourcing it and throwing it at any synthetic bio person who shows the faintest interest in the subject.
10:29 < FourFire> It would be great if something I can do could reduce indefinite lifespan research time by some small percentage
10:29 < maaku> Sounds like an AI problem actually. But anyway, what you're going to run into is that protein performance is not a graded landscape
10:30 < maaku> So you vary a gene. Does the variation get you closer to or further from a solution? How you answer that question is key
10:31 < FourFire> Sure, then I just need to implement algorithms for design-space search that other people I know are working on; that's not my concern right now.
10:32 < FourFire> My concern right now is individually learning and making the components for the initial genetic algorithm and tying them together with what I call a "bucket of scripts".
10:45 -!- c0rw|zZz is now known as c0rw1n
10:55 -!- irseeyou [~irseeyou@c-67-168-101-20.hsd1.wa.comcast.net] has joined ##hplusroadmap
10:57 < maaku> I think I may be under-emphasizing the difficulty of this problem, which I think is AGI-complete. That's why I work on AGI. But I say this not to discourage you.
10:58 < maaku> Hopefully by the time you hit that problem I'll have a more general heuristic finder to guide your searches.
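[editor's note: FourFire's 10:27-10:32 plan is a plain genetic algorithm: generate candidate gene sequences, grade each by how well the protein it encodes performs, keep the best, mutate, repeat. The sketch below only illustrates that loop; the log names no tooling, so translate(), score_protein(), and every parameter here are hypothetical placeholders rather than FourFire's actual "bucket of scripts". maaku's 10:29-10:30 objection (protein performance is not a graded landscape) lands entirely inside score_protein().

    # Hypothetical sketch of a gene-grading genetic algorithm.
    import random

    BASES = "ACGT"

    def translate(gene: str) -> str:
        """Stand-in for the transcription/translation/folding pipeline."""
        return gene  # placeholder; a real pipeline would yield a protein model

    def score_protein(protein: str) -> float:
        """Stand-in fitness: how well the encoded protein performs.
        This is where the hard, possibly AGI-complete, part hides."""
        return random.random()  # placeholder score

    def mutate(gene: str, rate: float = 0.01) -> str:
        return "".join(random.choice(BASES) if random.random() < rate else b
                       for b in gene)

    def evolve(pop_size: int = 100, gene_len: int = 300, generations: int = 50) -> str:
        population = ["".join(random.choices(BASES, k=gene_len)) for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population,
                            key=lambda g: score_protein(translate(g)), reverse=True)
            parents = ranked[: pop_size // 5]   # keep the top 20% as parents
            children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=lambda g: score_protein(translate(g)))

The outer loop is the easy "bucket of scripts" part; whether a mutation moves a design closer to or further from a working protein is decided entirely by the scoring function, which is exactly the step maaku flags as the real difficulty.]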
11:01 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 260 seconds]
11:04 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
11:07 < pasky> AGI-complete implies that humans are good at that, but are they? I don't think they are, actually
11:07 < pasky> it might just be intrinsically hard to do
11:08 < pasky> or maybe you mean AGI-complete in the sense that the AGI will then design and build new physical equipment to do that with real proteins instead of in silico ;)
11:11 < c0rw1n> if the AGI is that smart it'll do it in the most efficient way it finds, whichever that is of simulation or empirical work, or even some mix of both
11:15 -!- nildicit [~nildicit@gateway/vpn/privateinternetaccess/nildicit] has joined ##hplusroadmap
11:15 < maaku> pasky humans are shitty AGIs
11:16 < maaku> *general intelligences
11:17 < c0rw1n> ^ also that yes
11:18 < c0rw1n> maybe we simply don't have enough memory, or the ability to compute over large enough datasets, or whatever we want to call the limitations of our kludgy wetware
11:19 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 268 seconds]
11:32 < maaku> pasky a better statement is that AGI-complete means that humans are able to do it. Says nothing about whether humans are good at it (that implies architecture)
11:33 < maaku> I personally think humans are a horrible thinking architecture to emulate
11:34 < FourFire> maaku, I understand that the problem I've chosen for myself might well be impossible to resolve at various stages, but I'm pursuing it until I can ascertain that myself,
11:36 < FourFire> also I'm not going to go into AGI research because I think it's an attractor of "nerdy white dudes" and frankly, I'm not the sharpest lightbulb in the forest, so there's little point in my competing to find "the one AGI algorithm" blah blah; I'd prefer to do less impactful but still useful work on the side, which won't fall through if all current AGI research does.
11:37 < maaku> You misunderstand me. I think you're doing the right thing. Keep at it.
11:37 < FourFire> :D
11:37 -!- justanotheruser [~Justan@unaffiliated/justanotheruser] has quit [Read error: Connection reset by peer]
11:38 < maaku> I think you will trip, and when you do, ask for help here instead of giving up.
11:38 < FourFire> That's why I'm here
11:39 -!- justanotheruser [~Justan@unaffiliated/justanotheruser] has joined ##hplusroadmap
11:42 < maaku> The AGI company I will eventually form will be chiefly concerned with solving problems like that.
11:43 < maaku> But if your friend has a heuristic that doesn't require general intelligence, that's quite a breakthrough.
11:45 < FourFire> he's got several, for specific kinds of design spaces
11:45 < FourFire> the main advantage is that he has them now, as opposed to in 15 years
11:46 < FourFire> when your AGI, if it actually is an AGI, will instantly learn them as soon as it is given the task to do so.
11:47 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
11:48 < pasky> btw, an early pre-peer-review version of my paper... written in a 16-hour marathon on Friday ;) http://pasky.or.cz/sps.pdf (deep learning on sentences)
11:48 -!- mikebones_ [62a5cc2a@gateway/web/freenode/ip.98.165.204.42] has joined ##hplusroadmap
12:10 -!- ArturShaik [~ArturShai@37.218.157.134] has quit [Ping timeout: 252 seconds]
12:11 -!- jollybard [~picou@166.62.244.33] has quit [Ping timeout: 260 seconds]
12:59 < maaku> AGI isn't magic... replace "instantly" with "months of supercomputer time"
13:02 < justanotheruser> Is there a company I can send CAD files to that will machine parts for me?
13:03 < justanotheruser> for low volume
13:05 -!- jaboja [~jaboja@aejd62.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 250 seconds]
13:27 -!- sandeepkr_ [~sandeep@111.235.64.4] has joined ##hplusroadmap
13:31 -!- sandeepkr__ [~sandeep@111.235.64.4] has quit [Ping timeout: 252 seconds]
13:37 -!- sandeepkr__ [~sandeep@111.235.64.4] has joined ##hplusroadmap
13:39 -!- sandeepkr__ [~sandeep@111.235.64.4] has quit [Read error: Connection reset by peer]
13:39 -!- sandeepkr [~sandeep@111.235.64.4] has joined ##hplusroadmap
13:41 -!- sandeepkr_ [~sandeep@111.235.64.4] has quit [Ping timeout: 244 seconds]
14:18 -!- Diablo-D3 [~diablo@exelion.net] has quit [Ping timeout: 246 seconds]
14:18 -!- jaboja [~jaboja@eml13.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
14:23 -!- Diablo-D3 [~diablo@exelion.net] has joined ##hplusroadmap
14:30 -!- PatrickRobotham [uid18270@gateway/web/irccloud.com/x-bpbhdcfmrgsszblb] has joined ##hplusroadmap
14:58 -!- jaboja [~jaboja@eml13.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 276 seconds]
15:00 < esmerelda> Is it possible to determine what regions of DNA chromatin will bind to?
15:00 < esmerelda> Or rather, regions that will generally always be exposed, even without methyl/acylation?
15:02 -!- jaboja [~jaboja@eml13.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
15:04 < FourFire> maaku, but currently finding these patterns takes only weeks, collecting simulation data and getting a human to look over it manually
15:04 < FourFire> when it is formatted in the right way
15:05 < FourFire> the sole benefit of me working on my approach is that I'm working on my goal directly, and that I get some years' head start, maybe
15:05 < FourFire> if AGI stuff works and doesn't instantly game over, then great, my work isn't needed.
15:16 -!- sandeepkr [~sandeep@111.235.64.4] has quit [Ping timeout: 246 seconds]
15:23 -!- Orpheon [~Orpheon@213.200.193.129] has joined ##hplusroadmap
15:28 -!- fleshtheworld [~fleshthew@2602:306:cf0f:4c20:ac53:ecb4:fba8:850] has joined ##hplusroadmap
15:31 -!- mikebones_ [62a5cc2a@gateway/web/freenode/ip.98.165.204.42] has left ##hplusroadmap []
15:55 -!- Houshalter [~Houshalte@oh-71-50-63-56.dhcp.embarqhsd.net] has joined ##hplusroadmap
16:02 -!- T0BI [~ObitO@41.101.160.137] has quit []
16:40 -!- PatrickRobotham [uid18270@gateway/web/irccloud.com/x-bpbhdcfmrgsszblb] has quit [Quit: Connection closed for inactivity]
17:32 -!- Houshalter [~Houshalte@oh-71-50-63-56.dhcp.embarqhsd.net] has quit [Ping timeout: 268 seconds]
17:34 -!- jaboja [~jaboja@eml13.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 240 seconds]
17:40 -!- jaboja [~jaboja@eml13.neoplus.adsl.tpnet.pl] has joined ##hplusroadmap
18:13 -!- Houshalter [~Houshalte@oh-71-50-63-56.dhcp.embarqhsd.net] has joined ##hplusroadmap
18:22 -!- mabel [~mabel@unaffiliated/jacco] has joined ##hplusroadmap
18:26 -!- balrog [~balrog@unaffiliated/balrog] has quit [Quit: Bye]
18:32 -!- balrog [~balrog@unaffiliated/balrog] has joined ##hplusroadmap
18:42 -!- Filosofem [~Jawmare@unaffiliated/jawmare] has joined ##hplusroadmap
18:43 -!- Jawmare [~Jawmare@unaffiliated/jawmare] has quit [Ping timeout: 244 seconds]
18:44 -!- Filosofem is now known as Jawmare
18:49 -!- superkuh [~superkuh@unaffiliated/superkuh] has quit [Quit: somethings very wrong with my network right now.]
18:52 -!- superkuh [~superkuh@unaffiliated/superkuh] has joined ##hplusroadmap
19:03 < maaku> FourFire I was assuming you meant the time it takes to train a grad student to do that
19:14 -!- esmerelda [~andares@unaffiliated/jacco] has quit [Ping timeout: 250 seconds]
19:44 -!- Orpheon [~Orpheon@213.200.193.129] has quit [Read error: Connection reset by peer]
20:14 -!- AmbulatoryC0rtex [~Ambulator@173-31-155-69.client.mchsi.com] has quit [Quit: Leaving]
20:22 -!- jaboja [~jaboja@eml13.neoplus.adsl.tpnet.pl] has quit [Ping timeout: 250 seconds]
20:41 -!- c0rw1n is now known as c0rw|zZz
20:49 < maaku> pasky I look forward to reading it
21:35 -!- ArturShaik [~ArturShai@37.218.157.134] has joined ##hplusroadmap
21:55 < justanotheruser> FreeCAD is great software, I'm really liking it
22:01 -!- CheckDavid [uid14990@gateway/web/irccloud.com/x-ybixrwxmpmkdvose] has quit [Quit: Connection closed for inactivity]
22:11 -!- _hanhart [~hanhart@static.101.25.4.46.clients.your-server.de] has joined ##hplusroadmap
22:33 -!- nildicit_ [~nildicit@gateway/vpn/privateinternetaccess/nildicit] has joined ##hplusroadmap
22:35 -!- nildicit [~nildicit@gateway/vpn/privateinternetaccess/nildicit] has quit [Ping timeout: 244 seconds]
22:48 -!- yashgaroth [~yashgarot@2602:306:35fa:d500:f5e0:f867:a11d:8d52] has quit [Quit: Leaving]
22:58 -!- nildicit_ [~nildicit@gateway/vpn/privateinternetaccess/nildicit] has quit [Ping timeout: 252 seconds]
22:59 -!- sandeepkr [~sandeep@111.235.64.4] has joined ##hplusroadmap
22:59 -!- sandeepkr [~sandeep@111.235.64.4] has quit [Read error: Connection reset by peer]
23:00 -!- sandeepkr [~sandeep@111.235.64.4] has joined ##hplusroadmap
23:02 -!- Houshalter [~Houshalte@oh-71-50-63-56.dhcp.embarqhsd.net] has quit [Ping timeout: 240 seconds]
23:16 < FourFire> justanotheruser, it's got loads of functionality but its interface leaves much to be desired
23:18 -!- drewbot [~cinch@ec2-54-91-25-39.compute-1.amazonaws.com] has quit [Remote host closed the connection]
23:19 -!- drewbot [~cinch@ec2-54-161-181-25.compute-1.amazonaws.com] has joined ##hplusroadmap
23:29 -!- Houshalter [~Houshalte@oh-71-50-63-56.dhcp.embarqhsd.net] has joined ##hplusroadmap
23:45 -!- _hanhart [~hanhart@static.101.25.4.46.clients.your-server.de] has quit [Remote host closed the connection]
23:46 -!- _hanhart [~hanhart@static.101.25.4.46.clients.your-server.de] has joined ##hplusroadmap
--- Log closed Mon Mar 21 00:00:06 2016