
Showing posts with the label Artificial Intelligence

Global Optimization, and … Ants?

“Find the best solution” is one of those statements that can rapidly spiral into chaos depending on what exactly one means by “best”. For example, “what is the largest number in this list?” is easy, but “what is the best movie of the year?”, well, not so much, right? In math, Global optimization is what we call the task of finding the best set of conditions to achieve the objective. It also happens to be one part of an area called nonlinear programming (but that’s a different topic). It turns out that Swarm Intelligence, emergent behavior that arises from a few simple rules, is actually a pretty good way to come up with the solution. It comes from the way ants search for food, and as algorithms go, it’s really quite straightforward:
1. Each ant (agent) picks a random direction and heads out.
2. If it finds something, it tells its buddies, and starts bringing pieces back home.
3. If it doesn’t find anything, it ...
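The steps above can be sketched in a few lines. This is a toy agent-based version of the foraging idea (not full ant colony optimization, no pheromones); the names `forage` and `food_at` are illustrative, not from any library:

```python
# Toy ant foraging: ants random-walk along a line from the nest (position 0).
# The first ant to stumble onto the food "tells its buddies", and from then
# on everyone heads straight toward the known food location.
import random

def forage(food_at, n_ants=10, steps=200, seed=42):
    rng = random.Random(seed)
    positions = [0] * n_ants      # every ant starts at the nest
    found = None                  # shared knowledge: where the food is
    for _ in range(steps):
        for i in range(n_ants):
            if found is None:
                positions[i] += rng.choice([-1, 1])   # step 1: wander randomly
                if positions[i] == food_at:
                    found = food_at                   # step 2: recruit the others
            elif positions[i] != found:
                # everyone else walks toward the discovered food
                positions[i] += 1 if found > positions[i] else -1
    return found

print(forage(food_at=7))
```

The emergent part is that no single ant "solves" anything; the combination of random exploration plus a single shared signal is what locates the food.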

Deep Learning and Interpretability

One of the confounding things about Deep Learning is that we really don’t grok why it works. Oh, fine, we do, kinda, understand the “how” (Stochastic Gradient Descent, TensorFlow, NVIDIA, whatever), but “why it does what it does” is, well, beyond us. When you look at a trained neural network, our human tendency to categorize things comes to the fore. We look at the model, with all its weights, and we look for some sort of order, a pattern that could explain things, and, inevitably, we find what we’re looking for. Or, well, we kinda find it. In many cases, we spot neurons that seem to have very specific functions. For example, there is the infamous “Jennifer Aniston neuron” that fires whenever she shows up on the screen. On a more relevant note, we can identify clusters of neurons that are associated with very specific tasks (identifying cats, for example). The vast majority of the neurons, however, don’t actually seem to do ...

Neural Networks and Self-Replication

You know what quines are, right? They’re programs that print their own source code (the term comes from Douglas Hofstadter, who coined it in Gödel, Escher, Bach: An Eternal Golden Braid to describe self-replicating expressions such as “‘is a sentence fragment’ is a sentence fragment”). The fascinating thing about quines, especially when it comes to programming, is that they are a (very primitive!) form of self-replication. Oscar Chang and Hod Lipson have a paper out, Neural Network Quine, in which they have worked on bringing this self-replication to the world of neural networks. Well, to be precise, they’ve figured out a mechanism for a neural network to output its own weights by, effectively, cycling through prediction and optimization (they call this mechanism regeneration). Why would you want to do that? Well, for a couple of reasons:
1. Repair: Knowing how one is put together, and how one replicates, allows for fixi...
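To get a feel for regeneration, here is a deliberately tiny sketch, not the paper’s actual architecture: a linear “network” predicts its own weight vector, and one regeneration step replaces the weights with that self-prediction. Everything here (`predict`, `self_error`, the matrix `E`) is made up for illustration:

```python
# Toy "regeneration": weights := the network's prediction of its own weights.
# Repeating this drives the self-prediction error toward zero -- but note it
# collapses toward the trivial all-zero quine, which is exactly why the paper
# alternates regeneration with an optimization objective.
import random

random.seed(0)
N = 4  # number of weights in our toy network

# Fixed linear map: the "prediction" for weight i is a combination of all
# current weights. Small entries keep the map a contraction (stable).
E = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]

def predict(w):
    """The network's guess at its own weight vector."""
    return [sum(E[i][j] * w[j] for j in range(N)) for i in range(N)]

def self_error(w):
    p = predict(w)
    return sum((p[i] - w[i]) ** 2 for i in range(N))

w = [random.uniform(-1, 1) for _ in range(N)]
before = self_error(w)
for _ in range(50):        # 50 regeneration steps
    w = predict(w)
after = self_error(w)
print(before, after)       # error shrinks dramatically
```

The interesting tension, which this toy makes visible, is that “predict yourself perfectly” has a boring fixed point (all zeros), so a useful network quine has to balance self-replication against actually doing a task.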

Hacking your vision, with Deep Learning…

We know that deep learning is vulnerable to “adversarial” examples, where you make changes that completely fool your AI. You know, people are going to f**k with stop-signs and there will be crashes, death, and mayhem! The question is: are our brains susceptible to the same kind of stuff? SciFi has always assumed that the brain might be hackable, and, sadly, it looks like this might be the case in reality too. Follow the chain here:
1. You can mess with machine learning by tweaking the example images just a little bit. As an example, a wee bit of noise (indistinguishable to the human eye) makes Google think a panda is a gibbon…
2. These tweaked examples can often transfer across domains. Basically, you train your StopSignHack™ at home using your own neural network, and then put it out there on a real stop-sign, causing chaos (and getting arrested. Why nobody ever talks about that part, I don’t know).
3. Therefore, it might be possible to transfer these hacks across to huma...
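Step 1 of the chain above can be shown in miniature with a fast-gradient-sign-style attack on a toy linear classifier. The classifier, its weights, and the panda/gibbon labels here are invented purely for illustration:

```python
# Toy adversarial example: for a linear model, the gradient of the score
# with respect to the input is just the weight vector, so nudging each
# "pixel" a tiny bit against sign(w) lowers the score most efficiently.
def classify(x, w, b=0.0):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return ("panda" if score > 0 else "gibbon"), score

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

w = [2.0, -1.0]            # pretend "trained" weights
x = [1.0, 1.0]             # the original, correctly classified "image"

eps = 0.4                  # size of the (small) perturbation
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x, w))      # ('panda', 1.0)
print(classify(x_adv, w))  # the same model now sees a 'gibbon'
```

The unsettling bit is that `x_adv` differs from `x` by at most `eps` per coordinate, yet the label flips, and (point 2 above) perturbations crafted against one model often fool a different one.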

Intelligence, Agents, and #DeepLearning

Part of the trouble with achieving AI is defining WhatTF “Intelligence” even is in the first place. Or, to put it differently, what it means to be intelligent. And yes, that was a little rhetorical trick there, but one done on purpose: I’m switching the focus from the definition of intelligence to the definition of things that are intelligent, aka “agents” or “beings”. And that’s important, because, in the end, we deduce intelligence from actions taken, by asking questions like:
• Was that the correct choice? And,
• Did the choice help in the survival of the agent?
The last part above is critical, because “intelligent agents act to survive … they act to avoid dispersion by the forces of their environment that either, by way of natural laws, point towards increasing entropy, or even actively conspire against them in a competition for scarce resources (e.g. predators) … [The] reason for the existence of intelligence is the need of complex agents to su...