More Unknowns than equations? Not a problem! Use Sparsity!
http://sms.cam.ac.uk/media/860
David Donoho (Stanford University)
Monday 17 March 2008, 17:00–18:00
Steve Greenham
Isaac Newton Institute
Cambridge University
Everything you were taught about underdetermined systems of linear equations is wrong...
Okay, that's too strong. But you have been taught things in undergraduate linear algebra which, if you are an engineer or scientist, may be holding you back. The main one is that if you have more unknowns than equations, you're lost. Don't believe it. There are many interesting problems in the information sciences where researchers are currently confounding expectations by turning linear algebra upside down:
(a) A standard magnetic resonance imaging device can now produce a clinical-quality image using a factor of 8 less time than previously thought.
(b) A Fourier imaging system can observe just the lowest frequencies of a sparse non-negative signal and perfectly reconstruct all the unmeasured high frequencies of the signal.
(c) A communications system can transmit a very weak signal perfectly in the presence of intermittent but arbitrarily powerful jamming.
Moreover, in each case the methods are convenient and computationally tractable.
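Example (b) can be illustrated with a minimal numerical sketch (my own illustration, not from the talk; the signal length, sparsity level, and use of SciPy's `linprog` are illustrative choices). Because a k-sparse non-negative signal is the only non-negative signal consistent with its 2k+1 lowest Fourier coefficients, a plain feasibility linear program reconstructs it exactly, unmeasured high frequencies included:

```python
# Sketch of example (b): recover a sparse non-negative signal from only its
# lowest Fourier frequencies. All parameters are illustrative choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k = 128, 5                      # signal length, number of non-zero spikes
freqs = np.arange(2 * k + 1)       # measure only frequencies 0 .. 2k

# A k-sparse non-negative ground-truth signal.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

# Low-frequency DFT measurements, split into real and imaginary parts
# so the constraints are real-valued.
F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n)
y = F @ x_true
A_eq = np.vstack([F.real, F.imag])
b_eq = np.concatenate([y.real, y.imag])

# Ask for *any* non-negative signal matching the measurements (zero
# objective, i.e. a pure feasibility LP). The feasible set is a single
# point, so this reconstructs x_true.
res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("reconstruction error:", np.linalg.norm(res.x - x_true))
```

Note that no sparsity-promoting objective is needed here: non-negativity plus the measurements already pin the signal down, which is the surprise.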
Mathematically, what's going on is a recent explosion of interest in finding the sparsest solution to certain systems of underdetermined linear equations. This problem is known to be NP-hard in general, and hence sounds intractable.
Surprisingly, in some particular cases, it has been found that one can find the sparsest solution by l¹ minimization, which is a convex optimization problem and so tractable. Many researchers are now actively working to explain and exploit this phenomenon. It's responsible for the examples given above.
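To make the tractability concrete, here is a minimal sketch (again my own illustration, not from the talk): minimizing ‖x‖₁ subject to Ax = b becomes a linear program after splitting x = u − v with u, v ≥ 0, and for a random Gaussian A with enough rows it finds the sparsest solution exactly.

```python
# l1 minimization for an underdetermined system: more unknowns (n) than
# equations (m), yet the k-sparse solution is recovered exactly.
# Parameters are illustrative choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5               # unknowns, equations, non-zeros (m < n)

# Random Gaussian measurement matrix and a k-sparse ground truth.
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# min ||x||_1 s.t. Ax = b, as an LP: minimize sum(u + v)
# subject to A(u - v) = b with u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))  # should be near zero
```

The infinitely many solutions of Ax = b form an affine subspace; the LP picks out the one vertex of the l¹ ball scaled to touch that subspace, and in this regime that vertex is the sparse x_true.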
In my talk, I'll discuss this curious behavior of l¹ minimization and connect it with some pure mathematics: convex polytope theory and oriented matroid theory.
Ultimately, the pure math behind this phenomenon concerns some accessible but very surprising properties of random point clouds in very high dimensions: each point gets very neighborly!
I'll also explain the connection of this phenomenon to the Newton Institute's ongoing programme "Statistical Theory and Methods for Complex, High-Dimensional Data".
Original web seminar at: http://www.newton.ac.uk/programmes/SCH/seminars/031717001.html
In association with the Newton Statistical Theory and Methods for Complex, High-Dimensional Data programme:
http://www.newton.ac.uk/programmes/SCH/