Every cool project these days can connect to a computer or phone for control, right? But writing an app that runs on all the different platforms just to talk to your project is a pain. [Kevin Darrah] says it doesn't have to be, and shows how you can make Google Chrome do the dirty work. He takes a garden-variety Arduino and a cheap Bluetooth interface board and then controls it from Chrome. You can see the video below.
The HM-10 board is cheap and can connect to nearly anything. The control application uses Processing, the language and IDE the Arduino environment derives from. So how do you get from Processing to Chrome? Easy: the p5.js library lets Processing sketches run inside Chrome, and there's also a BLE library for p5.
Once you know about those libraries, you can probably figure the rest out, but [Kevin] shows a nice example that you could easily replicate. The Arduino and Bluetooth code aren't hard to follow. The Processing program looks a lot like an Arduino program, with a setup and a loop function, but it also has canvases, buttons, and other things you don't usually find in an Arduino sketch.
In November 2017, we showed you [Chris Annin]'s open-source 6-DOF robot arm. Since then he's been improving the arm and making it more accessible for anyone who doesn't get to play with industrial robots all day at work. The biggest improvement is that AR2 had an open-loop control system, and AR3 is closed-loop. If something bumps the arm or it crashes, the bot will recover its previous position automatically. It also auto-calibrates itself using limit switches.
AR3 is designed to be milled from aluminium or entirely 3D printed. The motors and encoders are controlled with a Teensy 3.5, while an Arduino Mega handles I/O, the grippers, and the servos. In the demo video after the break, [Chris] shows off AR3’s impressive control after a brief robotic ballet in which two AR3s move in hypnotizing unison.
[Chris] set up a site with the code, his control software, and all the STL files. He also has tutorial videos for programming and calibrating, and wrote an extremely detailed assembly manual. Between the site and the community already in place from AR2, anyone with enough time, money and determination could probably build one. Check out [Chris]’ playlist of AR2 builds — people are using them for photography, welding, and serving ice cream. Did you build an AR2? The good news is that AR3 is completely backward-compatible.
The AR3’s grippers work well, as you’ll see in the video. If you need a softer touch, try emulating an octopus tentacle.
Thanks for the tip, [Andrew]!
In the distant past, engineers used exotic devices to measure orientation, such as large mechanical gyros and mercury tilt switches. These are all still useful methods, but for many applications MEMS motion devices have become the gold standard. When [g199] set out to build their Balance Box game, it was no exception.
The game consists of a plastic box, upon which a spirit level is fitted, along with a series of LEDs. The aim of the game is to keep the box level while carrying it to a set goal. Inside, an Arduino Uno monitors the output of an MPU-6050, a combined accelerometer and gyroscope chip. If the Arduino detects the box tilting, it warns the user with the LEDs. Tilt it too far, and a life is lost. When all three lives are gone, the game is over.
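The game logic itself is simple enough to sketch in a few lines. Here's a hedged approximation in Python rather than [g199]'s actual Arduino code; the tilt thresholds and angles are invented for illustration:

```python
# Rough model of the Balance Box game loop. The warning and failure
# thresholds are illustrative guesses, not values from [g199]'s build.

WARN_DEG = 5.0    # tilt angle that lights the warning LEDs
FAIL_DEG = 15.0   # tilt angle that costs a life
LIVES = 3

def play(tilt_readings):
    """Run the game over a stream of (pitch, roll) angles in degrees.
    Returns the number of lives remaining (0 = game over). A real
    build would debounce so one sustained bump costs only one life."""
    lives = LIVES
    for pitch, roll in tilt_readings:
        tilt = max(abs(pitch), abs(roll))
        if tilt > FAIL_DEG:
            lives -= 1          # tilted too far: lose a life
            if lives == 0:
                break           # game over
        elif tilt > WARN_DEG:
            pass                # would warn the player with the LEDs
    return lives
```

Feeding it a run with two over-tilts leaves one life; three ends the game.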
It’s a cheap and simple build that would have been inordinately more expensive only 10 to 20 years ago. It goes to show the applications enabled by ubiquitous cheap electronics like MEMS sensors. The technology has other fun applications, too – for example the Stecchino game, or this giant balance board joystick. We’re certainly lucky to have such powerful technology at our fingertips!
The Thing is an unassuming name for an ambitious project to build an FPGA board from easy to find components.
The project stems from an earlier build submitted to the 2018 Hackaday Prize by [Just4Fun] where two dev boards – an STM32-based Arduino and an Altera MAX II CPLD board – were combined with the Arduino used as a stimulus generator for the CPLD. This way, the Arduino IDE, interfaced through USB, can be used for programming the CPLD.
The Thing similarly uses the STM32 Arduino as a companion processor for the FPGA, with a 512KB SRAM and common I/O for GPIOs and a PS/2 keyboard for running HDL SOCs. It can also run Multicomp VHDL SOCs, a modular design that was made to run some older 8-bit CPUs made by [Grant Searle].
The FPGA (EP2C5T144C8N) uses the Quartus II IDE for configuration with a USB Blaster dongle through the JTAG or AS connector. The FPGA side controls a 4-digit seven-segment LED display, four push buttons, 3 LEDs, a push button to clear all internal FFs (flip-flops), a push button to force a reboot (configuration reload), and a switch to force all pins to Hi-Z mode. Both an onboard 50MHz oscillator and a connector for an external oscillator are also present on the FPGA side.
In one demo of the board's MP/M capability, The Thing handled four concurrent users, with one serial port connected to a PC running a terminal emulator and the other serial ports connected to terminal emulators on VT100 boards, routed through a dual-channel RS232 adapter board.
Both the Arduino and FPGA sides can also be used as standalone boards, but why use one when you can harness both together?
As somebody who loves technology and wildlife and also needs to develop an old farmhouse, going down the bat detector rabbit hole was a journey hard to resist. Bats are ideal animals for hackers to monitor as they emit ultrasonic frequencies from their mouths and noses to communicate with each other, detect their prey, and navigate their way around obstacles such as trees — all done in pitch-black darkness. On a slight downside, many species just love to make their homes in derelict buildings and, being protected here in the EU, developers need to commission a rigorous survey to ensure as best as possible that there are no bats roosting on the site.
Obviously, the authorities require a professional independent survey, but there’s still plenty of opportunity for hacker participation by performing a ‘pre-survey’. Finding bat roosts with DIY detectors will tell us immediately if there is a problem, and give us a head start on rethinking our plans.
As can be expected, bat detectors come in all shapes and sizes, using various electrickery techniques to make them cheaper to build or easier to use. There are four different techniques most popularly used in bat detectors.
- Heterodyne: rather like tuning a radio, pitch is reduced without slowing the call down.
- Time expansion: chunks of data are slowed down to human audible frequencies.
- Frequency division: uses a digital counter IC to divide the frequency down in real time.
- Full spectrum: the full acoustic spectrum is recorded as a wav file.
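The frequency division approach is particularly easy to reason about: a binary counter toggles its output at a fixed fraction of the input rate, dropping a 45 kHz call to an audible few kilohertz. Here's a toy Python model of that idea, with the divide ratio picked purely for illustration:

```python
# Toy model of a frequency-division detector: square up the ultrasonic
# input and toggle the output every (divide/2)th rising edge, so the
# output frequency is the input frequency divided by `divide`.

def divide_frequency(samples, divide=16, threshold=0.0):
    """Return a 0/1 output waveform whose frequency is the input's
    divided by `divide`, one output level per input sample."""
    half_period = divide // 2     # output toggles twice per output cycle
    out, level, edges = [], 0, 0
    prev_high = samples[0] > threshold
    for s in samples:
        high = s > threshold
        if high and not prev_high:          # rising edge of the input
            edges += 1
            if edges == half_period:
                level ^= 1                  # toggle the now-audible output
                edges = 0
        prev_high = high
        out.append(level)
    return out
```

Note the trade-off visible even in the model: the output is a bare square wave, so all amplitude information is thrown away, which is exactly the limitation of the Ardubat circuit described below.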
Fortunately, recent advances in technology have now enabled manufacturers to produce relatively cheap full spectrum devices, which give the best resolution and the best chances of identifying the actual bat species.
DIY bat detectors tend to be of the frequency division type and are great for helping spot bats emerging from buildings. An audible noise from a speaker or headphones can prompt us to confirm that the fleeting black shape that we glimpsed was actually a bat and not a moth in the foreground. I used one of these detectors in conjunction with a video recorder to confirm that a bat was indeed NOT exiting from an old chimney pot. Phew!

The Technology
A great example of open source collaboration and iteration in action, the Ardubat was first conceived by Frank Pliquett and then expanded on by Tony Messina and more recently, simplified by Service Kring (PDF).
The Ardubat is a frequency division detector based on a TI CD4024 chip, fed by two LM386 amps. Bat detections are sent to an SD card which can be analysed afterwards to try and get some idea of the species. However, since this circuit works by pre-distorting the analog signal into a digital one and then dividing down, none of the amplitude information makes it through.
The Bat Detector 2015, the simplified version of the Ardubat developed by Service Kring, is again based on the CD4024, but uses a compact four-channel amp, the TL074CNE4. Three of the channels feed the frequency divider chip and the fourth is a headphone amplifier. It's a very neat design, and the signal LED is fed directly from the CD4024. It comes as a complete DIY soldering kit for about $10 including postage. Yes, $10!
One of the biggest limitations with these detectors is the ultrasonic sensors themselves, which typically have a frequency response similar to the curve shown here. More recently, ultra-wide range MEMS SMT microphones have been released by Knowles, which work well right up to 125,000 Hz and beyond! Some bats, most notably the Lesser Horseshoe, can emit calls of up to 115,000 Hz. However, these older style sensors are incredibly good at detecting about 90% of the bats found here in the UK and are much more sensitive than heterodyne detectors.
The ‘professional’ option that I chose was the UltraMic384 by Dodotronics, which pairs the Knowles FG23629 electret microphone with an integrated 32-bit ARM Cortex-M4 microcontroller and is capable of recording audio up to 192,000 Hz. There are also some good DIY hacker options such as the Audio Injector Ultra 2 for the Raspberry Pi, which can record at up to 96,000 Hz — but this is not quite good enough for all bats. Be aware that the sampling rate is twice the highest audio frequency captured, which can be quite confusing: an UltraMic sampling at 384 kHz records audio up to 192 kHz.
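That sampling-rate confusion is just the Nyquist rule, which is worth pinning down in two lines:

```python
# The Nyquist rule from the text: a recorder only captures audio
# frequencies up to half its sampling rate, and conversely a target
# frequency needs a sampling rate of at least twice that.

def max_audio_hz(sample_rate_hz):
    """Highest audio frequency a given sampling rate can capture."""
    return sample_rate_hz / 2

def min_sample_rate_hz(audio_hz):
    """Sampling rate needed to capture a given audio frequency."""
    return audio_hz * 2

ultramic_audio = max_audio_hz(384_000)          # 192 kHz: covers every UK bat
horseshoe_rate = min_sample_rate_hz(115_000)    # 230 kHz sampling needed
```

So a device limited to 96 kHz audio will miss the Lesser Horseshoe's calls at up to 115 kHz, while the UltraMic384's 384 kHz sampling comfortably covers them.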
These full spectrum devices can produce high resolution sonograms (spectrograms) using Audacity software. This is very helpful for wildlife enthusiasts who want to know what the actual bat species is, although even with the best tech it's still sometimes very difficult or impossible to determine the species, especially within the Myotis genus.
So now we are fully equipped to check for bats in the derelict building using the DIY detector in conjunction with a video camera and a few pairs of human eyeballs. The full spectrum detector will be set to record right through the night and be used to check if there’s any activity we might have missed and tell us at the very least what genus the bats are.
All we need now is some Machine Learning to automatically identify the species. ML is a new frontier for bat detection, but nobody has yet produced a reliable system due to the similarity in the calls of different species. We know neural networks are being applied to recognize elephant vocalizations and the concept should be applicable here. A future project for an intrepid hacker? As for the Ardubat – it’s crying out for a better microphone, if not the expensive FG23629 then the 50 cent Knowles SMT SPU0410LR5H, which also has a great frequency response curve.
[Main image: Myotis bechsteinii by Dietmar Nill CC-BY-SA 2.5]
Leaving no stone unturned in his quest for alternative and improbable ways to generate lift, [Tom Stanton] has come up with some interesting aircraft over the years. But this time he isn’t exactly flying, with this unusual Coandă effect hovercraft.
If you’re not familiar with the Coandă effect, neither were we until [Tom] tried to harness it for a quadcopter. The idea is that air moving at high speed across a curved surface will tend to follow it, meaning that lift can be generated. [Tom]’s original Coandă-copter was a bit of a bust – yes, there was lift, but it wasn’t much and wasn’t easy to control. He did notice that there was a strong ground effect, though, and that led him to design the hovercraft. Traditional hovercraft use fans to pressurize a plenum under the craft, lifting it on a low-friction cushion of air. The Coandă hovercraft uses the airflow over the curved hull to generate lift, which it does surprisingly well. The hovercraft proved to be pretty peppy once [Tom] got the hang of controlling it, although it seemed prone to lifting off as it maneuvered over bumps in his backyard. We wonder if a control algorithm could be devised to reduce the throttle if an accelerometer detects lift-off; that might make keeping the craft on the ground a bit easier.
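That throttle-reducing idea could be as simple as a few lines in the craft's control loop. A hedged Python sketch, with the lift-off threshold and throttle cut invented for illustration:

```python
# Sketch of the lift-off guard floated above: if the vertical
# accelerometer reading suggests the hull has left the ground, back
# off the throttle. The threshold and cut factor are guesses, not
# values from [Tom Stanton]'s build.

LIFTOFF_G = 1.2        # vertical acceleration (in g) that suggests lift-off
THROTTLE_CUT = 0.8     # multiply throttle by this while airborne

def guard_throttle(throttle, vertical_g):
    """Reduce throttle whenever the accelerometer reports lift-off."""
    if vertical_g > LIFTOFF_G:
        return throttle * THROTTLE_CUT
    return throttle
```

In a real controller you'd want some filtering so a single bump in the backyard doesn't trip the guard, but the principle is just this simple.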
As always, we appreciate [Tom]’s builds as well as his high-quality presentation. But if oddball quadcopters or hovercraft aren’t quite your thing, you can always put the Coandă effect to use levitating screwdrivers and the like.
It should probably go without saying that the main reason most people buy an electric vehicle (EV) is because they want to reduce or eliminate their usage of gasoline. Even if you aren’t terribly concerned about your ecological footprint, the fact of the matter is that electricity prices are so low in many places that an electric vehicle is cheaper to operate than one which burns gas at $2.50+ USD a gallon.
Another advantage, at least in theory, is reduced overall maintenance cost. While a modern EV will of course be packed with sensors and complex onboard computer systems, the same could be said for nearly any internal combustion engine (ICE) car that rolled off the lot in the last decade as well. But mechanically, there's a lot less that can go wrong on an EV. For the owner of an electric car, the days of oil changes, fouled spark plugs, and the looming threat of a blown head gasket are all in the rear-view mirror.
Unfortunately, it seems the rise of high-tech EVs is also ushering in a new era of unexpected failures and maintenance woes. Case in point: some owners of older model Teslas are finding they're at risk of being stranded on the side of the road by a failure most of us would more likely associate with losing some documents or photos: a disk read error.

Linux Loudly Logging
Much like the rockets and spacecraft of sister company SpaceX, Tesla's vehicles are powered by Linux running on what's essentially off-the-shelf computing hardware. Until 2018 the Model S and X ran the open source operating system on an NVIDIA Tegra 3, at which point Tesla switched the Media Control Unit (MCU) over to an Intel Atom solution. In either event, the Linux system is stored on an embedded Multi-Media Controller (eMMC) flash chip instead of a removable storage device as you might expect.

Tegra module from a pre-2018 Tesla MCU
Now under normal circumstances, this wouldn't be an issue. There are literally billions of devices running Linux from an eMMC chip. But any competent embedded Linux developer would take the steps necessary to make sure the operating system's various log files are not written to a non-replaceable storage device soldered onto the board.
Unfortunately, for reasons that still remain somewhat unclear, the build of Linux running on the MCU is doing exactly that. What’s worse, Tesla’s graphical interface appears to be generating its own additional log messages. Despite the likelihood that nobody will ever actually read them, for every second a Tesla is driving down the road, more lines are being added to the log files.
Now, it appears that the near continuous writing of data to the eMMC chips on the older Tegra-based MCUs has finally started to take its toll. Owners on Tesla forums are reporting that their MCUs are crashing and leaving the expensive vehicles in “Limp Home Mode”, which allows the car to remain drivable but unable to charge. The prescribed fix for this issue by Tesla is a complete MCU replacement at the cost of several thousand dollars. As this failure will almost certainly happen after the factory warranty has lapsed, the owner will have to foot the bill themselves.

We’re Gonna Need a Bigger Chip
Generally speaking, each block of a flash device can only be written to a few thousand times. So to extend their usable lifespan, when data is written to the drive it will essentially be moved around the physical device in a process known as wear leveling. Because this additional wrinkle is specific to flash, it took some time to refine the controllers and make the necessary adjustments to modern journaling file systems to accommodate the new storage medium. But today, these issues are largely resolved and not something most users need to be concerned with.
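The arithmetic behind why free space matters is worth making concrete. With ideal wear leveling, the writes are spread evenly over however many blocks are free, so per-block wear scales inversely with free space. A back-of-the-envelope model (every number here is made up for illustration, not Tesla's actual flash geometry):

```python
# Illustrative model of wear leveling: total writes are spread
# round-robin across the free blocks, so the most-worn block sees
# roughly total_writes / free_blocks erase cycles.

def worst_block_wear(total_writes, free_blocks):
    """Erase cycles endured by the most-worn block under ideal
    round-robin wear leveling (ceiling division)."""
    return -(-total_writes // free_blocks)

ENDURANCE = 3000          # typical eMMC block erase endurance (order of magnitude)
writes = 3_000_000        # hypothetical lifetime log writes

roomy = worst_block_wear(writes, free_blocks=10_000)   # plenty of free space
full  = worst_block_wear(writes, free_blocks=100)      # nearly full chip
```

With room to spare the worst block sees 300 cycles; on the nearly full chip the same workload hammers each free block 30,000 times, sailing far past its endurance. That's the Tegra MCU's predicament in miniature.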
Unfortunately for Tesla, it seems that the eMMC chips on the Tegra modules are simply too small to hold the latest release of the firmware while still leaving enough free blocks on the chip to enable effective wear leveling. With only a small section of the eMMC left available, the system has no choice but to reuse the same blocks over and over.
According to Phil Sadow, who for the last few years has been providing repair services where Tesla won’t, the official fix for the problem on the newer Intel boards was to simply give them a larger eMMC. This will keep more free blocks available so the drive will be able to perform wear leveling, but he says that Tesla still hasn’t fixed the underlying issue of the Linux operating system continually churning out log entries. Given the ever-growing amount of software being pushed to the vehicles through over-the-air updates, the problem may eventually hit these newer MCUs as well.

Avoiding the Obvious
For those with even a moderate amount of experience with embedded Linux, the solution to this problem seems painfully obvious. Either redirect the log files to RAM so they're never written out to the storage device, or just disable logging altogether. It's a trick the Raspberry Pi community is well acquainted with, and it was even used to squeeze more battery life out of laptops in the old days of spinning rust. So how could it be that Tesla's engineers aren't doing the same?
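The "log to RAM" idea can be shown in miniature: point the logger at an in-memory buffer instead of a file on flash. On an embedded Linux box the equivalent is mounting /var/log on tmpfs; this Python sketch just demonstrates that nothing ever touches persistent storage:

```python
# Logging to RAM instead of flash: messages land in an in-memory
# buffer that vanishes on reboot, so the storage device is never
# written. The log messages are invented for illustration.

import io
import logging

ram_buffer = io.StringIO()                    # lives in RAM, lost on reboot
handler = logging.StreamHandler(ram_buffer)
log = logging.getLogger("mcu")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("wheel speed nominal")               # written to RAM, not flash
log.info("touchscreen frame rendered")

print(ram_buffer.getvalue().splitlines())
```

The obvious downside, and perhaps Tesla's rationale, is that everything in that buffer is gone the instant power is lost.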
The simple answer is that we just don’t know. One theory is that Tesla wants to make sure all possible data is stored to a non-volatile device so it will be available in the event of a crash. As they continue to refine their self-driving technology, data recovered from wrecked vehicles is of exceptional value to the automaker. But Phil notes that in the new Intel MCU, normal vehicle diagnostic information is stored on an SD card instead of the eMMC; and more importantly, it seems Linux log entries would be of limited use in an accident investigation anyway.
For now, owners of pre-2018 Model S and X vehicles don’t seem to have many options. The Tegra board can be removed from the MCU and logging can be disabled, but naturally such modifications could put you in hot water with Tesla. The alternative is to wait until the eMMC chip has breathed its last and begrudgingly pay Tesla to repair an issue that ultimately they’re responsible for causing. It might not be the head gasket of old, but it seems even electric vehicles have a few expensive gremlins lurking under the hood.
When life gives you lemons, you make lemonade. At least that’s what [Sprice Machines] thought when they decided to turn a house into the set of a 9-minute-long Rube Goldberg machine to make lemonade. (Video embedded below.) The complex chain reaction runs across multiple rooms, using everyday objects like brooms and even a vibrating smartphone to transfer energy across the complex contraption.
While the team professionally builds Rube Goldberg machines for clients, the Lemonade Machine looks surprisingly organic, like something a family might decide to do for fun over a long weekend (although there are a few moments that make you question just how they were able to perfectly time every sequence in the chain reaction). Even though the actual lemonade making only takes up a small fraction of the machine, watching marble runs, weights dashing across a clothesline, and random household items repurposed into energy transfer mechanisms is really entertaining.
The [Sprice Machines] have been making Rube Goldberg machines for quite some time, posting the videos of their final runs on YouTube. Other builders for the Lemonade Machine included [Hevesh5], [DrComplicated], [DoodleChaos], [TheInvention11], [5MadMovieMakers], and [SmileyPeaceFun].
[Thanks Itay for the tip!]
Jibo, the adorable robot made by Jibo, Inc., was getting phased out, but that didn’t stop [Guilherme Martins] from using his robot companion for one last hack.
When he found out that the company would be terminating production of new Jibos and shutting down their servers, he wanted to replace the brain of the robot so that it would continue to live on even after all of its software had become deprecated. By the time the project started, the SDK downloads had already been removed from the developer’s site, so he looked at other options for controlling Jibo.
The first challenge was to not break the form factor in order to disassemble Jibo. They only managed to remove the battery from the bottom, realizing that the glass frame held the brain room. From within the robot, they were able to find the endless rotation joint for the head and the heart of the electronics. Jibo uses a DC motor, encoder, and IR sensor at each of three distinct levels to detect reference points.
They decided to use Phidgets modules to interface with these devices. The DC motor controller handles 2 A and has an encoder port, and the Phidgets software provides built-in encoder and PID support. The 4x Digital Input Module was used to detect the IR switches and connect the modules to the computer.
[Martins] decided to use LattePanda, a hackable Windows 10 development board, for the brain of the new Jibo. The board was luckily able to fit inside the compartment for Jibo, but since it requires more power the unit is powered with 12V regulated to 5V in order to have less current passing through the wires. The DC motors, meanwhile, run at 12V and the IR switches and encoders at 5V.
A program developed in Unity3D plays the eye animations, and a C# program interfaces with the Phidgets. The final configuration was to fit Jibo onto a robotic arm to augment its behaviors. We previously wrote about Toppi, the robotic arm artist, that was used as the base for Jibo’s new home.
You can check out the result in the video below.
An anonymous (for reasons that will be obvious pretty soon) commenter left a gem on my Disaster Recovery Test Faking blog post that is way too valuable to be left hidden and unannotated.
Here’s what he did:
Once I was tasked to do a DR test before handing over the solution to the customer. To simulate the loss of a data center, I suggested physically shutting down all core switches in the active data center.
Transformers are deceptively simple devices. Just coils of wire sharing a common core, they tempt you into thinking you can make your own, and in many cases you can. But DIY transformers have their limits, as [Great Scott!] learned when he tried to 3D-print his own power transformer.
To be fair, the bulk of the video below has nothing to do with 3D-printing of transformer coils. The first part concentrates on building transformer cores up from scratch with commercially available punched steel laminations, in much the same way that manufacturers do it. Going through that exercise and the calculations it requires is a great intro to transformer design, and worth the price of admission alone. With the proper number of turns wound onto a bobbin, the laminated E and I pieces were woven together into a core, and the resulting transformer worked pretty much as expected.
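The turns calculation that makes this exercise worthwhile follows from the standard transformer EMF equation, N = V / (4.44 · f · B · A). A quick sanity check in Python; the flux density and core area here are plausible illustrative numbers, not the values from [Great Scott!]'s video:

```python
# Transformer EMF equation: N = V / (4.44 * f * B * A), where 4.44
# is sqrt(2)*pi for a sinusoidal supply, f is mains frequency in Hz,
# B is peak flux density in tesla, and A is core cross-section in m^2.

def turns(voltage, freq_hz, b_max_tesla, core_area_m2):
    """Turns required for a winding to support the given RMS voltage."""
    return voltage / (4.44 * freq_hz * b_max_tesla * core_area_m2)

# 230 V primary, 50 Hz mains, 1.2 T peak flux, 6 cm^2 core (illustrative)
n_primary = turns(230, 50, 1.2, 6e-4)
```

That works out to roughly 1,400 turns, and it also shows why the printed core failed: with the iron-PLA's far lower usable flux density B, the turns count required at 50 Hz becomes impractical.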
The 3D-printed core was another story, though. [Great Scott!] printed E and I pieces from the same iron-infused PLA filament that he used when he 3D-printed a brushless DC motor. The laminations had nowhere near the magnetic flux density of the commercial stampings, though, completely changing the characteristics of the transformer. His conclusion is that a printed transformer isn’t possible, at least not at 50-Hz mains frequency. Printed cores might have a place at RF frequencies, though.
In the end, it wasn’t too surprising a result, but the video is a great intro to transformer design. And we always appreciate the “DIY or Buy” style videos that [Great Scott!] does, like his home-brew DC inverter or build vs. buy lithium-ion battery packs.
Some 3D printers will give you prints with surfaces resembling salmon skin – not exactly the result you want when you’re looking for a high-quality print job. On bad print jobs, you can usually notice that the surface is shaking – even on the millimeter scale, this is enough to give the print a bumpy finish and ruin the quality of the surface. TL smoothers help even out the signal going to the stepper motors on a 3D printer, specifically from the notoriously noisy DRV8825 motor drivers.
Analyzing the sine wave for the DRV8825 usually shows a stepped signal, rather than a smooth one. Newer chips such as the TMC2100, TMC2208, and TMC2130 do a much better job at providing smooth signals, as do cheaper drivers like the commonly used A4988s.
[Fugatech 3D Printing] demonstrates some prints from a D-Force Mini with an MKS Base 1.4 smoother-based control board, which is easier to use and smarter than Marlin. On the two prints using smoothers, one uses a board with four diodes, while the other was printed with a board with eight diodes. [Mega Making] compares how the different motor drivers work and experimentally shows the stuttering across the different motors before and after connecting to the smoothers.
The yellow and pink traces are the current for each phase of the motor. The blue and green traces are the voltages on each terminal of the phase with the yellow current. [via Schrodinger Z]

A common problem with DRV8825 drivers is their voltage rating, which is lower than most supplies. When a 3D printer is moving slower than 100 mm/min, the motor is unable to move smoothly.
[Schrodinger Z] does a bit of digging into the reason for the missing microsteps, testing out different decay modes in DRV8825s and why subharmonic oscillations occur in the signals from the motor.
The driver consequently has a “dead zone” where it is unable to produce low currents. Offsetting the drive voltage by 1.4 V (the diode drop below which no current flows) would allow the dead zone to be bridged. This also happens to be the logic behind the design of the smoothers, although it is certainly possible to use different diodes to customize the power losses depending on your particular goal for the motor.
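The mechanism is easy to model: the series diode pair eats a fixed ~1.4 V, so for the same small phase current the driver has to produce a higher output, which carries it clear of its misbehaving low-output region. A toy Python model, with every number invented for illustration:

```python
# Toy model of the smoother trick. The driver can't regulate small
# outputs (the "dead zone"), and a series diode pair adds a fixed
# drop, so the driver operates at a higher output for the same motor
# current. All values here are illustrative, not measured.

DIODE_DROP = 1.4   # volts across the series diode pair
DEAD_ZONE = 1.0    # outputs below this are where the driver misbehaves

def required_output(i_target, r_phase=2.0, smoother=False):
    """Voltage the driver must produce to push i_target amps
    through a motor phase of r_phase ohms."""
    v = i_target * r_phase
    if smoother and i_target > 0:
        v += DIODE_DROP            # the diodes eat their drop first
    return v

def clear_of_dead_zone(i_target, r_phase=2.0, smoother=False):
    """Can the driver hit this current without entering its dead zone?"""
    v = required_output(i_target, r_phase, smoother)
    return v == 0 or v >= DEAD_ZONE
```

A 0.2 A microstep needs only 0.4 V bare, squarely inside the dead zone; with the smoother in line the driver must produce 1.8 V for the same current, well clear of it.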
Debugging signal problems in a 3D printer can be a huge headache, but it’s also gratifying to understand, from the current analysis, why microsteps go missing.
[Thanks Keith O for the tip!]
Automakers continue to promise that fully autonomous cars are around the corner, but we’re still not quite there yet. However, there are a broad range of driver assist technologies that have come to market in recent years, with lane keeping assist being one of them. [raja_961] decided to implement this technology on an RC car, using a Raspberry Pi.
A regular off-the-shelf RC car is used as the base of the platform, outfitted with two drive motors and a third motor used for the steering. Unfortunately, the car can only turn either full-left or full-right, limiting the finesse of the steering. Despite this, the work continued. A Raspberry Pi 3 was fitted out with a motor controller and camera, and hooked up to the chassis. With everything laced up, a Python script is used along with OpenCV to run the lane-keeping algorithm.
[raja_961] does a great job of explaining the lane keeping methodology. Rather than simply invoking a library and calling it good, instead the Instructable breaks down each stage of how the algorithm works. Incoming images are converted to the HSL color system, before a series of operations is used to pick out the apparent slope of the lane lines. This is then used with a PID algorithm to guide the steering of the car.
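The PID step at the end of that pipeline can be sketched in a few lines. This is a generic PID controller in Python rather than [raja_961]'s actual script, and the gains and error signal (degrees of lane offset) are placeholders:

```python
# Bare-bones PID controller of the kind used for the steering. The
# error would be the lane-line slope's offset from straight ahead;
# the gains here are illustrative, not tuned values.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """Return the steering command for the current error."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.1, kd=0.2)
steering = pid.update(error=5.0, dt=0.1)   # lane 5 degrees off-center
```

Since this car can only steer full-left or full-right, the continuous output would be thresholded, which is exactly why the steering finesse suffers.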
It’s a comprehensive explanation of a basic lane-keeping algorithm, and a great place to start if you’re interested in learning about the technology. There’s plenty going on in the world of self-driving RC cars, you just need to know where to look! Video after the break.
Light painting has long graced the portfolios of long-exposure photographers, but high resolution isn’t usually possible when you’re light painting with human subjects.
This weekend project from [Timmo] uses an ESP8266-based microcontroller and an addressable WS2812-based LED strip to paint words or custom images in thin air. It’s actually based on the Pixelstick, a tool used by professional photographers for setting up animations and photorealism shots. The equipment needed for setting up the light painting sticks runs in the order of hundreds of dollars, not to mention the professional camera and lenses needed. Nevertheless, it’s a huge step up from waving around a flashlight with your friends.
The LED Lightpainter takes the Pixelstick a few notches lower for amateur photographers and hobbyists. It directly supports 24-bit BMP, with no conversion needed. Images are stored internally in Flash memory and are uploaded through a web interface. The number of LEDs, the display time per image row, and STA/AP mode for wireless connections are also set through the web interface. The project uses the Adafruit NeoPixel, ArduinoJson, and Bodmer’s TFT_HX8357 libraries to implement the BMP drawing code, which also allows an image preview prior to upload. Images are drawn from the bottom row to the top, so they have to be transformed before being sent to the LED painter.
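That bottom-to-top drawing order just means the image rows need to be reversed before upload. A minimal version of the transform on a row-major pixel array (the three-row image here is a stand-in, not the project's actual data format):

```python
# The bottom-to-top transform: reverse the row order so the source
# image's bottom row is sent first, as the painter draws it.

def flip_rows(pixels):
    """Reverse row order so row 0 of the result is the bottom row."""
    return pixels[::-1]

image = [
    ["r", "r", "r"],   # top row of the source image
    ["g", "g", "g"],
    ["b", "b", "b"],   # bottom row
]
ready = flip_rows(image)   # bottom row now comes first
```

Applying the flip twice is a no-op, which makes it an easy transform to verify.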
Some future improvements planned for the project include TFT/OLED support, rainbow or color gradient patterns in the LEDs, and accelerometer or gyroscope support for animations.
There aren’t currently too many galleries of DIY LED-enabled light paintings, but we’d love to see some custom modded light painting approaches in the future.
This isn’t the first LED light stick we’ve seen, if you’re interested in such things.