Artificial Intelligence

Software 2.0 or how to train AI in 2018

Submitted by luxian on Tue, 06/12/2018 - 09:23

Yesterday I stumbled upon a talk from Andrej Karpathy, TRAIN AI 2018 - Building the Software 2.0 Stack, in which he explains his vision of the current state of software engineering.

The talk is half an hour long and not technical at all. It explains how AI now faces a tooling scarcity rather than a lack of algorithms. He says that at Tesla they try to automate algorithm tweaking, and because of that they now spend more time developing tools and training data sets. This sounds similar to Google's approach to AI and ML.

Hopefully this approach will allow Tesla to deliver an autonomous Autopilot faster.

Deep learning gallery: Quadcopter trail navigation

Submitted by luxian on Tue, 02/28/2017 - 18:57

Deep Learning Gallery

I have had deeplearninggallery.com in my bookmarks for quite some time, but I didn't have time to check it until today. As you probably guessed from the domain name, it's a gallery of cool deep learning projects. It's worth checking out.

To stir your curiosity, here is a cool project I saw today:

Quadcopter trail navigation in forest

The project's goal was to make a drone that can follow a trail through a forest using only one camera (no 3D imaging). You can see the results in the video - they pretty much did it. If you want to read more, you will find links in the video description.


Google's AI becomes aggressive in stressful situations

Submitted by luxian on Mon, 02/13/2017 - 21:07

Actually this is not very new if you have read all the previously recommended links in the article that started this series, "Artificial Intelligence". This new article shows that, depending on the environment (how abundant resources are, how the goal is defined, and the connections between AI agents), the behavior of AI can be either aggressive or highly cooperative.

The full story can be found in the article Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations. The following quotes are from it:

(...) the message is clear - put different AI systems in charge of competing interests in real-life situations, and it could be an all-out war if their objectives are not balanced against the overall goal of benefitting us humans above all else.

(...) the initial results show that, just because we build them, it doesn't mean robots and AI systems will automatically have our interests at heart.


Super intelligence might be further away than we think

Submitted by luxian on Thu, 12/29/2016 - 22:28

If you watched and read the links I posted in the Artificial Intelligence article, you probably believe that Super Intelligence might become reality sooner rather than later. Today I found an article that tries to argue against this: The Singularity is Further Than it Appears.

A short list of arguments:

  • Super Intelligence will not create the next level of Super Intelligence very fast, because this is not a linear problem.
  • There is no incentive for companies to create human level intelligence at the moment.
  • There are some ethical questions we have to face when we build a truly sentient AI: Can we turn it off? Would that be murder? Can we experiment on it? Does it deserve privacy? What if it starts asking for privacy? Or freedom? Or the right to vote?
  • Uploading our brains to a computer is still a distant dream. At the moment we are able to "simulate" a cat brain, but it runs 600 times slower than the real one and is missing many features the real brain has - so the simulation is far from complete. Even with this incomplete simulation, we would have to wait until 2035 or 2040 for the hardware needed to simulate a brain similar in size to a human brain.

via HN

Artificial Intelligence

Submitted by luxian on Mon, 12/05/2016 - 20:08

Artificial Intelligence (AI) and how it might change humankind is a much-debated topic nowadays. To understand it better you can check these links:

If you prefer reading, Tim Urban (Wait But Why) explains it in two articles: 

Or you can watch the TED talk that sums it up: