A comprehensive resource for safe and responsible laser use
It works by holding a standard pen-type laser pointer between two cams. Cranking a handle turns the cams, which bounce the laser pointer up/down and left/right to create projected patterns:
By using different cam shapes, different patterns can be projected:
Instructions and plans are available online, including Thingiverse 3D printing files.
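The pattern a device like this draws follows directly from the two cam profiles: one cam deflects the beam horizontally and the other vertically, so the projected spot traces a Lissajous-like figure as the crank turns. As a rough illustration (not taken from Stanford's plans), here is a Python sketch assuming simple sinusoidal, multi-lobed cam profiles with made-up dimensions:

```python
import math

def cam_radius(theta, base=20.0, amplitude=5.0, lobes=3):
    """Radius of a hypothetical cam profile at crank angle theta (radians).
    A cam with N lobes pushes its follower N times per crank revolution."""
    return base + amplitude * math.sin(lobes * theta)

def projected_point(theta, h_lobes=3, v_lobes=4):
    """One cam deflects the pointer left/right, the other up/down.
    Different lobe counts on the two cams yield different patterns."""
    x = cam_radius(theta, lobes=h_lobes)
    y = cam_radius(theta, lobes=v_lobes)
    return x, y

# Sample one full crank revolution to get the projected pattern path.
pattern = [projected_point(2 * math.pi * i / 360) for i in range(360)]
```

Changing the lobe counts (or using non-sinusoidal profiles, as real cam shapes would) changes the projected figure, which is why swapping cams swaps patterns.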
Stanford noted, “At this point I think it is unlikely I will continue the project. But if I did, here’s what I could do:” He then listed adding blinds to make discontinuous patterns, making the device motor-driven, and adding a web service to make it easier to create new cam patterns.
From Evan Stanford’s Hackaday.io page, posted in mid-June 2017
Michael Reeves’ tongue-in-cheek narration states “…it’s really doing its job of lasering me in the eye which is the real innovation here. To my pleasant surprise I found that this machine also solved another of society's problems; the fact that you're not seeing little tiny dots in your vision all day long. I know where to go when I wanted to see little dots, now I can't focus on anything.”
The laser in the video looks substantially more powerful than the U.S. FDA limit of 5 milliwatts. (However, it can be difficult to estimate laser power from a video. For example, the camera may be more red-sensitive than human eyes which might explain why the beam seems so large and bright.)
Anyone doing this should be aware that laser pointers are often more powerful than their labels state, and more powerful than the U.S. limit of 5 mW.
Fortunately for Reeves’ vision, the laser is mechanically aimed by two devices that move it left-right and up-down. This makes the aiming relatively slow; it lags behind the facial recognition, so the beam can be dodged much of the time. Reeves moves to avoid the beam, and is hit in or very near an eye about once every couple of seconds.
The screenshot below shows the camera (blue arrow) and a laser module mounted on two servos (yellow arrow).
As befits a student budget, the housing is an old pizza box. Reeves wrote the facial recognition and aiming program in C#, using Emgu CV, a .Net wrapper for the OpenCV computer vision library.
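The core of such a program is the mapping from a detected face position in camera pixels to servo angles. Reeves’ actual code is C# with Emgu CV and is not shown here; the following is a hypothetical Python sketch of that aiming arithmetic, with assumed frame size, field of view, and a 0–180 degree servo convention:

```python
def pixel_to_servo_angles(face_x, face_y, frame_w=640, frame_h=480,
                          h_fov_deg=60.0, v_fov_deg=45.0):
    """Map a detected face center (in pixels) to pan/tilt servo angles.

    Assumes the laser is mounted close to the camera, so angles in the
    camera frame approximate servo angles; 90 degrees = servo centered.
    """
    # Normalized offset of the face from the frame center, in [-0.5, 0.5].
    dx = (face_x - frame_w / 2) / frame_w
    dy = (face_y - frame_h / 2) / frame_h
    # Scale by the camera's field of view to get an angular offset.
    pan = 90.0 + dx * h_fov_deg
    tilt = 90.0 + dy * v_fov_deg
    return pan, tilt

# A face centered in the frame leaves both servos at their midpoint.
print(pixel_to_servo_angles(320, 240))   # (90.0, 90.0)
```

In a real build, the face detector (e.g. an OpenCV cascade classifier) supplies the face rectangle each frame, and the servo commands are sent over serial to a microcontroller; the round trip through detection and servo motion is what makes the aiming laggy enough to dodge.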
In about a day, the video received 80,000 views and was featured on the tech blog The Verge.
From The Verge. Original YouTube video here.
UPDATED April 19, 2017: Michael Reeves told CNET, “My eyes are fine. A lot of people seem concerned about that, which I admit is warranted. I used a 5 mW laser diode, and never had it in my vision for more than a fraction of a second.”
LIDAR sensors on self-driving cars work by sending out laser light (usually invisible infrared beams) to detect objects’ shapes and distances. According to Jonathan Petit, there is a problem: “Anybody can go online and get access to this, buy it really quickly, and just assemble it, and there you go, you have a device that can spoof lidar.”
The LIDAR can be made to falsely perceive objects that do not exist, or to ignore objects that are actually present.
A simple attack could cause the self-driving car to run into another car or an object. A more sophisticated attack could cause the car to choose a different path. Petit says “[this] means that then the risk could be ‘I’m sending you to small street to stop you and rob you or steal the car.’”
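The spoofing works because a lidar infers distance purely from the round-trip time of its pulse. An attacker who detects the outgoing pulse and fires a matching pulse back after a chosen delay makes the sensor report a phantom object at whatever distance corresponds to that delay. A minimal sketch of the timing arithmetic (illustrative only, not Petit's attack code):

```python
C = 299_792_458.0  # speed of light in air, m/s (vacuum value, close enough)

def echo_delay_for_distance(distance_m):
    """Round-trip time a lidar expects from an object at distance_m.
    A spoofer replying after this delay fakes an object at that range."""
    return 2.0 * distance_m / C

def reported_distance(delay_s):
    """Distance the lidar computes from an echo arriving after delay_s."""
    return C * delay_s / 2.0

# To fake an obstacle 10 m ahead, the attacker must reply about 66.7 ns
# after the lidar's outgoing pulse.
t = echo_delay_for_distance(10.0)   # roughly 6.67e-8 seconds
```

The nanosecond timescales are why the attack needs a pulse generator rather than just a bare laser pointer, consistent with the extra equipment mentioned below.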
The Business Insider article is unclear, but it appears the $43 is for equipment in addition to the cost of the laser pointer. Also, although the article did not say, the laser pointer may need to emit infrared light instead of, or in addition to, visible light.
Petit is a post-doctoral researcher at the University of California, Berkeley.
From an article in Business Insider, posted December 15, 2016. The detailed article also discusses many other non-laser techniques of hacking self-driving cars.