“Somebody’s Watching Me,” a song by Rockwell, might be the anthem for the tin foil hat crowd. But a new paper reveals that it might be just as scary to have someone listening to you. Researchers have used common microphones to listen in on computer monitors. The demonstration includes analyzing audio to determine input from virtual keyboards and even a way to tell if people are surfing the web during a Google Hangouts session.
Reading monitors based on electronic emissions is nothing new — ask Wim van Eck or read about TEMPEST. What makes this worrisome is that we constantly have live microphones around our computers: webcams, phones, the latest smart assistant. Even some screens have built-in microphones. According to the paper, you could even pick up data from recorded audio. The paper has three main goals: extract display text, distinguish between different websites on screen, and extract text entered with a virtual keyboard.
The analysis looked at 31 different screens. There were 12 distinct models from 6 different vendors. They did use a special VGA cable to tap the vertical sync to help manage the data, but they claim this was only an aid and not essential. They also used a high-end sound setup with a 192 kHz sampling rate.
Measuring the sound made by different display patterns was an empirical exercise. The authors think the mechanism stems from subtle changes in the vibration of power supply components as current consumption changes. The refresh rate of the monitor also plays a part.
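The paper’s analysis is far more sophisticated, but the core idea — that refresh-rate periodicity leaves a fingerprint in the audio spectrum — can be sketched with a toy signal. Everything here (the 12 kHz carrier, the modulation depth) is invented for illustration, not taken from the paper:

```python
import cmath
import math

FS = 192_000          # sampling rate used by the researchers
REFRESH = 60.0        # typical monitor refresh rate in Hz
N = FS // 10          # 100 ms of audio, giving 10 Hz frequency bins

# Synthesize a stand-in "coil whine": a power-supply tone amplitude-
# modulated at the refresh rate -- the periodicity the attack keys on.
samples = [(1 + 0.5 * math.sin(2 * math.pi * REFRESH * n / FS))
           * math.sin(2 * math.pi * 12_000 * n / FS) for n in range(N)]

def magnitude(freq_hz):
    """Magnitude of one DFT bin -- a poor man's spectrum analyzer."""
    w = -2j * math.pi * freq_hz / FS
    return abs(sum(x * cmath.exp(w * n) for n, x in enumerate(samples)))

# Refresh-rate modulation shows up as sidebands at carrier +/- 60 Hz.
for f in (12_000, 12_060, 12_030):
    print(f, round(magnitude(f), 1))
```

The sidebands at carrier ± 60 Hz stand well clear of the noise floor, while a frequency in between shows essentially nothing — which is why display activity is recoverable from the spectrum at all.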
Armed with the proof of concept, the team went on to mount the attack using an LG V20 cellphone’s microphone during a Hangouts call. Imagine if the person on the other end of your call could tell when you were reading Hackaday instead of paying attention to the call.
Different monitor models need to be learned individually for best accuracy. It appears that reading small text may be a problem, too. Even website detection depends on training. Still, maybe the tin foil hat people aren’t exactly wrong.
If you want to try your hand at reading the RF emissions, software defined radio is your friend. We’ll be interested to see if anyone duplicates the acoustic method in this paper, though.
A great many robots exist in our modern world, and the vast majority of them are highly specialized machines. They do a job, and they do it well, but they don’t have much of a personality. [Guilherme Martins] was working on a fun project to build a robot arm that could create chocolate artworks, but it needed something to humanize it a bit more. Thankfully, Jibo was there to lend a hand.
For the uninitiated, Jibo was a companion robot produced by a startup company that later folded. Relying on the cloud meant that when the money ran out and the servers switched off, Jibo was essentially dead. [Guilherme] managed to salvage one of these units, however, and gave it a new life.
With the dead company unable to provide an SDK, the entire brains of the robot were replaced with a LattePanda, which is a Windows 10 single-board computer with an integrated Arduino microcontroller. This was combined with a series of Phidgets motor drivers to control all of Jibo’s joints, and with some Unity software to provide the charming expressions on the original screen.
With the Jibo body mounted upon the robot arm, a simple chocolate-decorating robot now has a personality. The robot can wave to humans, and emote as it goes about its day. It’s an interesting feature to add to a project, and one that certainly makes it more fun. We’ve seen projects tackle similar subject matter before, attempting to build friendly robot pets as companions. Video after the break.
You’ve perhaps noticed that [Jeremy Cook] is rather prolific on YouTube, regularly putting out videos on his latest and greatest creations. He wanted to add a head-mounted GoPro to his video production bag of tricks, but found it was a little trickier than expected to get the camera to point where he was actually looking. The solution? A 3D printed laser “sight” for the GoPro that lets him zero it in while creating videos.
The idea here is very simple: put a small laser module on the same mount as the GoPro itself so you’ll have a handy red dot showing more or less where the camera is looking. The position of the red dot relative to the center-point of the camera’s field of view is going to vary slightly with range, but with something like a GoPro that’s shooting a very wide area to begin with, it’s not really a problem in practice.
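To put some rough numbers on that parallax error (the laser offset and field of view below are assumptions for illustration, not [Jeremy]’s measurements):

```python
import math

LASER_OFFSET_M = 0.03   # assumed 3 cm between laser and camera lens
GOPRO_HFOV_DEG = 120.0  # roughly a GoPro's wide horizontal field of view

def dot_error_fraction(distance_m):
    """Fraction of the horizontal frame the parallax error occupies."""
    # Angular offset between where the dot sits and the frame centre.
    error_deg = math.degrees(math.atan2(LASER_OFFSET_M, distance_m))
    return error_deg / GOPRO_HFOV_DEG

for d in (0.5, 2.0, 10.0):
    print(f"{d:>4} m -> {dot_error_fraction(d):.3%} of the frame")
```

Even at arm’s length the dot sits within a few percent of the frame centre, and the error shrinks with distance — which is why the offset is a non-issue on such a wide-angle camera.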
Sounds like a good idea, but won’t that leave a weird red dot in all the videos? [Jeremy] is already ahead of you there, and added a small push button switch to the front of the module so he can quickly and easily turn the laser on and off. The idea is that he turns the laser on, gets the dot roughly where he wants the camera pointed, and then turns it back off.
[Jeremy] has put the STL files for the single-piece 3D printed module up on his GitHub for anyone who might find them useful. Besides the printed part, you just need to provide a suitably sized 3.7 V LiPo battery and the laser diode itself. If you need to find a good supply of cheap lasers, you might want to check the clearance rack at the big box store.
Reinforcement learning is a subset of machine learning where the machine is scored on its performance (an “evaluation function”). Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building towards an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that could happen, he had to build the hardware and prove its basic functionality with a manual test script.
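None of this is [Dheera]’s code, but a minimal tabular Q-learning loop shows the score-driven training described above on a toy “hop to the goal” problem:

```python
import random

random.seed(0)

N_STATES = 5          # positions 0..4; reaching 4 earns the reward
ACTIONS = (-1, +1)    # hop left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def evaluate(state, action):
    """Evaluation function: +1 for reaching the goal, else 0."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, reward = evaluate(s, a)
        # Positively reinforce actions that improved the score.
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy hops right from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The lamp’s real training problem is vastly harder — continuous joint angles, camera input, physical dynamics — but the reinforce-what-scored-well loop is the same idea.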
Inspired by the hopping logo of Pixar Animation Studios, this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, making this project an interesting challenge. [Dheera] published all of his OpenSCAD files for this 3D-printed lamp so others could join in the fun. Inside the lamp head is a LED ring to illuminate where we expect a light bulb, while also leaving room in the center for a camera. Mechanical articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with Robot Operating System (ROS) orchestrating our lamp’s features. This completes the underlying hardware components and associated software foundations for this robot lamp.
Once all the parts had been printed, electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify his mechanical design was good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack afterparty following Maker Faire Bay Area 2019. This motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. Future hopping performance of control algorithms trained by reinforcement learning will show how far this lamp has grown from this humble “Hello World” hop.
[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how robot Luxo will evolve, hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.
At first, we thought this robot was like a rabbit, until we realized rabbits have a 300% bonus in the leg department. SALTO, a robot from [Justin Yim], [Eric Wang], and [Ronald Fearing], has only one leg but gets around quite well hopping from place to place. If you can’t picture it, the video below will make it very obvious.
According to the paper about SALTO, existing hopping robots require external sensors and are often tethered. SALTO is self-contained. The robot weighs a tenth of a kilogram and takes its name from the word saltatorial (adapted for leaping), which itself comes from the Latin saltare, meaning to jump or leap.
The robot’s controller considers four distinct modes: stance, when it is standing on the ground; liftoff, when it is launching itself; flight, when it is in the air; and touchdown, when it reconnects with the ground. Balancing the robot during stance is old hat, of course. But at liftoff, the robot computes an error term for its velocity and uses that to compute a correction value. The robot has a tail and two small propellers to control its attitude.
At the start, the robot balances on three points: its toe, its rear ankle, and one end of its tail. Using gyros, it is able to set initial values. It then stands up in different poses and uses the thrusters to zero out any roll and pitch.
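SALTO’s published controller is more involved, but the velocity-error idea is in the spirit of the classic Raibert hopping controller, sketched below with made-up gains and timing:

```python
# Hedged sketch: not SALTO's actual controller, just the classic
# Raibert hopping idea it builds on -- place the foot for the next
# touchdown based on the measured velocity error at liftoff.

STANCE_TIME = 0.1   # assumed stance duration in seconds
K_P = 0.04          # assumed feedback gain, metres per (m/s) of error

def foot_target(velocity, desired_velocity):
    """Horizontal foot placement relative to the hip at touchdown."""
    neutral = velocity * STANCE_TIME / 2   # placement that keeps speed constant
    error = velocity - desired_velocity    # liftoff velocity error term
    return neutral + K_P * error           # correction value

# Moving too fast -> foot lands ahead of the neutral point to brake;
# too slow -> behind it, to accelerate on the next hop.
print(foot_target(1.5, 1.0))
print(foot_target(0.5, 1.0))
```

The elegance of this scheme is that one scalar gain turns a velocity error into a foot placement, which is all a single-legged hopper really has to work with.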
We were not far into the video before we wondered if the beastie could climb stairs. It can’t. The authors say that estimation errors mean the foot can land up to half a meter away from the intended spot. However, they believe future versions will have improved estimation that would let it climb stairs, leap over furniture and other obstacles, and handle a variety of terrain. We only hope they print the poor thing a kangaroo body.
Jumping robots always bring back our nightmares of Atlas breaking down our bedroom door. He has no problem with stairs. We’ve also seen a prototype lunar rover that can jump over things, even though that’s not its primary mode of locomotion.
Sometimes, there’s a job to be done and the required tools don’t fall easily to hand. [Bob] found himself in just such a position, needing to get some window flashing made up despite lacking a sheet metal break. After waiting far too long for someone else to do the job, [Bob] elected to simply make the tools and do it himself instead (Youtube link, embedded below).
The project came about simply because [Bob] needed to bend 42″ sections of flashing, and couldn’t find a decent deal on a sheet metal brake above 36″ wide. The build starts with some angle iron and simple hinges, bolted together to form a basic brake design. With some rectangular hollow section bolted on for handles, the brake is then clamped to the bench and is ready for action.
It’s a build that any experienced hacker could whip up in an afternoon and be pumping out basic sheet metal parts by sundown, and requires no welding to boot. To learn more about bending sheet metal, check out our primer on the subject. Video after the break.
Kate Matsudaira wrote a nice article explaining how to deal with emotional attachment to a project you spent a lot of time working on. While she's focusing on software development, the same fallacies apply to networking - sometimes it's time to let the old pile of **** die and replace it with something created in this decade.
There are plenty of techniques and components in our everyday hardware work whose connection and coding are almost a done deal. We are familiar with them and have used them before, so we drop them in without a second thought. But there was a first time we used each of them; we had to learn somewhere, right? [TheMagicSmoke] has produced just what we’d have needed back then for one ubiquitous component: the I2C EEPROM.
These chips provide relatively small quantities of non-volatile storage, and though they are not the fastest of memory technologies, they have a ready application in holding configuration or other often-read, rarely-written data.
Since the ST24C04 512-byte device in question has an I2C bus, it’s a straightforward add-on for an Arduino Mega, so we’re shown the wiring, for which only a couple of pull-up resistors are required, and some sample code. It’s not the most complex of projects, but it succinctly shows what you need to do so that you too can incorporate an EEPROM in your work.
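The sample code in question is Arduino-based, but the addressing scheme is worth understanding on its own. This sketch (plain Python, no hardware) models how a 24C04’s 512 bytes are addressed, assuming the standard 0x50 base address and a 16-byte write page:

```python
# Hedged sketch: models how a 512-byte 24C04 is addressed over I2C,
# rather than driving real hardware. The chip exposes two 256-byte
# blocks; bit 8 of the byte offset rides inside the device address.

BASE_ADDR = 0x50      # 1010 A2 A1 P0, with A2 = A1 = 0
PAGE_SIZE = 16        # assumed write-page size for a 24C04

def i2c_target(offset):
    """Return (device_address, word_address) for a byte offset 0..511."""
    assert 0 <= offset < 512
    return BASE_ADDR | ((offset >> 8) & 1), offset & 0xFF

def page_writes(offset, data):
    """Split a write so no chunk crosses a page boundary."""
    chunks = []
    while data:
        room = PAGE_SIZE - (offset % PAGE_SIZE)
        chunks.append((i2c_target(offset), data[:room]))
        offset, data = offset + len(data[:room]), data[room:]
    return chunks

print(i2c_target(0x1FF))          # top byte lives in the second block
print(page_writes(14, b"hello"))  # write split across a page boundary
```

Forgetting either quirk — the block-select bit or the page boundary — is the classic way a first EEPROM project silently corrupts data, so it’s worth internalizing before reaching for the Wire library.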
If learning about I2C EEPROMs piques your interest, perhaps you’d like to read a previous look we made at them.
Underfloor heating is a wonderfully luxurious touch for a bedroom and en-suite bathroom, and [Andy] had it fitted so that he could experience the joy of walking on a toasty-warm floor in the morning. Unfortunately after about a year it stopped working and the culprit proved to be its thermostat. A replacement was eye-wateringly expensive, so he produced his own using an ESP8266-powered Sonoff wireless switch.
The thermostat uses a thermistor as its temperature sensor, embedded in the floor itself. This could be brought to the ESP’s solitary ADC pin, but not without a few challenges along the way. The Sonoff doesn’t expose the pin, so some very fine soldering was the first requirement. A simple voltage divider allowed the pin to be fed, but in doing so he made the unfortunate discovery that the ESP’s analogue input has a surprisingly low voltage range. A new divider tied to ground solved the problem, and he was good to go.
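[Andy]’s exact component values aren’t given, so as an illustration, here’s the divider arithmetic for an assumed 10 kΩ NTC thermistor (beta 3950) against a 10 kΩ fixed resistor:

```python
import math

# Hedged sketch of the divider maths, assuming a 10k NTC thermistor
# (beta = 3950) against a 10k fixed resistor -- not [Andy]'s values.
V_IN = 3.3
R_FIXED = 10_000.0
R0, T0, BETA = 10_000.0, 298.15, 3950.0   # NTC datasheet constants

def adc_voltage(temp_c):
    """Voltage at the ESP's ADC pin for a floor temperature in Celsius."""
    t = temp_c + 273.15
    r_ntc = R0 * math.exp(BETA * (1 / t - 1 / T0))
    # NTC on top, fixed resistor to ground feeding the ADC pin.
    return V_IN * R_FIXED / (R_FIXED + r_ntc)

def temp_from_voltage(v):
    """Invert the divider to recover temperature from an ADC reading."""
    r_ntc = R_FIXED * (V_IN - v) / v
    t = 1 / (1 / T0 + math.log(r_ntc / R0) / BETA)
    return t - 273.15

v = adc_voltage(25.0)
print(round(v, 3), round(temp_from_voltage(v), 1))
```

Note that at room temperature this divider already sits at 1.65 V, well above the ESP8266 ADC’s roughly 0–1 V input range — exactly the kind of surprise [Andy] ran into, and why the divider values must be chosen to keep the pin voltage in range.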
Rather than using an off-the-shelf firmware he created his own, and with a bit of board hacking he was able to hard-wire the mains cabling and use one set of Sonoff terminals as a sensor connector. The whole thing fit neatly inside an electrical fitting box, so he’s back once more to toasty-warm feet.
This isn’t the first ESP thermostat we’ve featured, nor will it be the last. Here’s a particularly nice build from 2017.
We like to pretend that wires are perfect all the time. For the most part that’s acceptable, but sometimes you really do care about those tiny fractional ohm quantities. Unfortunately though, most meters won’t read very low values. There are tricks you can use to achieve that aim, such as passing a known current through the device and measuring the tiny voltage drop. It is handier, though, to have an instrument that makes the reading directly, and [Kasyan TV] built just that with a surprisingly low part count.
The whole thing is built from an LM317, a resistor, and a voltmeter module, that’s it. [Kasyan] mentions the meter’s accuracy means the lower digits are not meaningful, but it looks to us as though there are other sources of error — for example, there’s no way to zero out the probe’s resistance except during the initial calibration.
This isn’t going to be perfect — you’d do better with a 4-wire measurement and a way to zero out shorted probes. However, it does seem to work well enough, and it is a simple, but useful, project.
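The measurement principle is simple enough to put into numbers. The values below are illustrative, not taken from the video: the LM317 regulates 1.25 V across its programming resistor, making a constant-current source, and Ohm’s law does the rest:

```python
# Hedged sketch of the milliohm-meter principle: an LM317 plus one
# programming resistor forms a constant-current source, and the
# unknown resistance follows from Ohm's law. Values are illustrative.
V_REF = 1.25            # LM317 reference voltage between OUT and ADJ
R_PROGRAM = 1.25        # ohms -> a convenient 1 A test current

def test_current():
    """Constant current forced through the resistance under test."""
    return V_REF / R_PROGRAM

def unknown_resistance(measured_volts):
    """Resistance under test, from the voltmeter module's reading."""
    return measured_volts / test_current()

# A 4.7 milliohm shunt drops 4.7 mV at 1 A:
print(unknown_resistance(0.0047))
```

With a 1 A test current the voltmeter reads out resistance directly in “volts as ohms,” which is what makes the three-part build work at all — though, as noted, probe resistance still rides along in the reading.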
Our favorite quote is this one:
The case itself was printed on a 3D printer. It turned out ugly and not neat, but I don’t care much.
Sounds like a good way to think about it.
Summer in the Northern hemisphere means outdoor cooking. Matches are old school, and you are more likely to use a piezoelectric lighter to start your grill. [Steve Mould] has one, but he didn’t understand the physics behind why it works, so he decided to do the research and share it in a video.
The first two minutes are a recap of things you already know. But after that, [Steve] gets into the crystal lattice structure of quartz. Using some computer animations and some peanut butter lids, he shows you exactly why compressing the crystal generates electricity.
If you think you don’t care about barbecue lighters, [Steve] reminds you that the same effect is what makes the quartz crystals we use in crystal oscillators work. In particular, he looks at how a little crystal that vibrates at 32,768 Hz can make a watch.
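The arithmetic behind that watch crystal is satisfyingly tidy: 32,768 is exactly 2^15, so a chain of fifteen divide-by-two stages turns it into a once-per-second tick:

```python
# Why 32,768 Hz suits a watch: it is 2**15, so fifteen successive
# divide-by-two stages (flip-flops) turn it into a 1 Hz tick.
freq = 32_768
stages = 0
while freq > 1:
    freq //= 2      # one flip-flop halves the frequency
    stages += 1
print(stages, freq)   # 15 divider stages, 1 Hz out
```

That power-of-two frequency is also low enough for a tiny, low-power tuning-fork-shaped crystal — a deliberate compromise between divider length and crystal size.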
We really liked the steel rule demonstration in the second video. It would be a great thing to do for a science class. The fact that the crystals replaced actual tuning forks inside watches shows how far things have changed in the last 50 years.
If you think you know about flip flops, by the way, you may find the end of the second video a bit surprising, and quite the unusual use of flip flops.
X-acto knives are popular as the scalpel of the craft world. Obviously, holders for the blades are available off the shelf, but you needn’t settle for store-bought. [Ariel Yahni] set about making an X-acto handle of their own, and it shows just how quick and easy making your own tools can be.
The blades are first measured to determine the appropriate dimensions for the holder. With this done, the basic shape of the handle is drawn up in CAD software using simple primitive shapes and lines. Then it’s just a simple matter of jigging up a piece of aluminium stock in the CNC machine, and letting it do its thing.
The final result needs minimal finishing – primarily just an inspection of the parts, minor deburring and the drilling and tapping of the mount holes. With a couple of socket head cap screws and an X-acto blade installed, it’s ready for work.
We see a lot of interesting tool builds around these parts. You might consider making your own ultrasonic cutter if you’re regularly finishing 3D printed parts. Video after the break.
Over the last few years the open-source RISC-V microprocessor has moved from existing only on FPGAs into real silicon, and right now you can buy a RISC-V microcontroller with all the bells and whistles you would ever want. There’s an interesting chip from China called the Sipeed M1 that features a dual-core RISC-V core running at 600MHz, a bunch of I/Os, and because it’s 2019, a neural network processor. We’ve seen this chip before, but now Seeed Studios is selling it as a Raspberry Pi Hat. Is it an add-on board for a Pi, or is it its own standalone thing? Who knows.
The Grove AI Hat for Edge Computing, as this board is called, is built around the Sipeed MAix M1 AI Module with a Kendryte K210 processor. This is a dual-core 64-bit RISC-V chip and it is obviously the star of the show here. In addition to this chip you’ve also got a few Grove headers for digital I/O, I2C, PWM, and a UART. There’s a USB Type-C port for power (finally we’re getting away from USB micro power plugs), and of course a 40-pin Raspberry Pi-style header.
This board is essentially a breakout board for the Sipeed M1 chip, which is one of the most interesting new microcontrollers we’ve seen since it launched late last year. There’s a lot of power here, and already people are emulating the Nintendo Entertainment System on this chip with great success. The problem with this chip is that apart from making your own breakout board, there aren’t many options to get it up and running quickly. This is the solution to that; at the very least it’s a Sipeed chip on a board with a power supply, and it’s also a co-processor that can be accessed with Linux and a Raspberry Pi.
The ESP32 is well known for both its wireless communication abilities, as well as the serious amount of processing power it possesses for a microcontroller platform. [Robert Manzke] has leveraged the hardware to produce a Eurorack audio synthesis platform with some serious capabilities.
Starting out as a benchmarking project, [Robert] combined the ESP32 with a WM8731 CODEC chip to handle audio and an MCP3208 analog-to-digital converter. This gives the platform stereo audio and the ability to handle eight control-voltage inputs.
The resulting hardware came together into what [Robert] calls the CTAG Strämpler. It’s a sampling-based synthesizer, with a wide feature set for some serious sonic fun. On top of all the usual bells and whistles, it features the ability to connect to the freesound.org database over the Internet, thanks to the ESP’s WiFi connection. This means that new samples can be pulled directly into the synth through its LCD screen interface.
With the amount of power and peripherals packed into the ESP32, it was only a matter of time before we saw it used in some truly impressive audio projects. It’s got the grunt to do some pretty impressive gaming, too. Video after the break.
When Maker Faire Bay Area closed down early Saturday evening, the fun did not stop: there’s a strong pool of night owls among the maker demographic. When the gates close, the after-parties around San Mateo run late into the night, and Hackaday’s meetup is a strong favorite.
This year Hackaday and Tindie joined forces with Kickstarter and moved our combined event to B Street Station, a venue with more space for hacks than in previous years. The drinks started flowing and great people started chatting, all basking in an ever-present glow of LEDs. A huge amount of awesome hardware showed up, so let’s take a look at the demos and stunts that came out to play.
While we hosted many Maker Faire veterans as repeat visitors, we also gladly welcomed newcomers curious to see what people have created. [Sam Freeman] was one of many who were happy to break down their projects into beginner-friendly pieces. His LED goggles started from an Adafruit tutorial and continually evolved into the version seen today. [Daniel Young] was equally friendly explaining his magical Melty Cube spinning on the end of his LED wizard’s staff. It features two ESP8266s working together to avoid the need for an expensive slip ring: one lives inside the cube driving LEDs, the other handles the spinning motor. Being welcoming, approachable, and willing to explain is how we grow our community.
[Garrett] is a man with many ideas, judging by the number of illuminated light bulbs over his head. Built as a quick hack for a disco party, the sound-reactive electronics were repurposed from a pair of LED glasses. (Look at how the USB power enters the helmet.) WS2812 LEDs were wired inside plastic light-bulb-shaped party favors. It doesn’t get as hot as we might think, but it is heavier than it looks. In hindsight [Garrett] wished he hadn’t cut off the helmet’s original chin strap early in this project.
It’s not always about the LEDs, though! [Scorch] brought two of his String Shooters, whose instructions have been posted online. Building one depends on the size of traction material available for the wheels: his large unit was built around a bicycle inner tube, and the smaller one (video) around the rubber bands used in dental braces. In addition to the two string shooters, he also brought an ultrasonic levitator based on instructions posted by Make. It’s one thing to read about them online; it’s quite another to see a small flake of aluminum foil actually float in the air in front of us.
Similarly, it was quite a treat to see tiny Femtobeacons in person. Yes, I can show you a picture of one next to a U.S. quarter dollar coin for scale, but it’s not the same as holding one in my hands hoping not to drop it on the floor of a dark bar. Creator [Femtoduino] documented the project on Hackaday.io and also sells units on Tindie. Other Tindie sellers were present, some brought their products for sale and others brought recent projects just for fun.
[Maniacal Labs] brought a large 7-segment display previously seen at KiCon. This single digit serves as a conversation piece as well as a proof of concept for a project which will feature 12 of these digits.
[Luther Johnson] brought a full MakerLisp setup: a tiny portable embedded Lisp computer complete with USB keyboard and VGA monitor.
A pair of mostly 3D-printed Mars rover models were on display, one by yours truly (wearing a raincoat) and a success story of sharing online. [Marco] found the Hackaday.io project page and built his yellow rover incorporating his own customizations. Maker Faire was the first real world meeting of these rover siblings and their creators.
Our Hackaday Prize this year encourages participants to work through how their idea can scale beyond single prototypes to volume production. We believe putting them in people’s hands will multiply our collective ability to improve the world. So it was ideal for us to join forces with Kickstarter for this event, inviting product teams seeking funding via Kickstarter. These innovators have put a great deal of thought into their ideas, and this is an opportunity for us to learn from them.
One of the Kickstarter projects present is Kinazium, offering educators a construction set for building small robot mazes fit for palm-sized robots. Teachers in classrooms appreciate being able to get maze parts without scissors or knives, and put them together without tape or glue.
Another Kickstarter project team present is Chatterbox, offering the technology of smart speakers like Amazon Echo and Google Home but without the retail advertising surveillance baggage of those popular devices. A cute appearance appeals to children, and a COPPA compliant software stack protects them online. We think these features will also appeal to security-conscious adults.
These and many more projects adorned the tabletops of B Street Station, making it the best place to stop after Maker Faire Saturday. We would like to thank everyone who joined our party, a place to share our work, our curiosity, and our passion for making something cool. We all inspire each other to turn our ideas into reality, and we hope to see everybody (plus new faces too) next year!
Hackaday Podcast Ep20: Slaying The Dragon Of EL, Siege Weapon Physics, Dis-entangled Charlieplex, Laser Internet
Join editors Elliot Williams and Mike Szczys as they unpack all the great hacks we’ve seen this week. On this episode we’re talking about laser Internet delivered from space, unwrapping the complexity of Charlieplexed circuits, and decapping ICs both to learn more about them and to do it safely at home. We have some fun with backyard siege weapons (for learning about physics, we swear!), gambling on FPGAs, and a line-scanning camera that’s making selfies fun again. And nobody thought manufacturing electroluminescent displays was easy, but who knew it was this hard?
Take a look at the links below if you want to follow along, and as always, tell us what you think about this episode in the comments!
Direct download (78 MB of bodacious audio)
Places to follow Hackaday podcasts:
- All the Hacks of Hackaday’s Bay Area Maker Faire Meetup
- PokerBot Uses FPGA For Card Calculating Horsepower
- Lateral Thinking For An Easier Charlieplex
- Ben Krasnow Makes a DSKY
- Make Physics Fun with a Trebuchet
- Integrated Circuits Can Be Easy to Understand with the Right Teachers
- A Very Modern Flying Spot Scanner
- Mike’s Picks:
- Elliot’s Picks:
- Everything We Know About SpaceX’s Starlink Network
- Zork And The Z-Machine: Bringing The Mainframe To 8-bit Home Computers
Earlier in May we heard news that Maker Faire Bay Area is facing a financial crunch. Although the future of the event is unknown, the legacy is easy to see. Maker Faire throughout the world has had a profound effect on countless lives. It’s hard to imagine that it all started with that first event way back in 2006. Mike Szczys spent some time last weekend tracking down as many people as he could who were at that first one. With only a moment’s notice, these folks each shared a happy memory, helping us picture what it was like at the inaugural Maker Faire.
Thank you to Lenore Edman for her extensive help in connecting with people who were at the first Maker Faire.
Sometimes, rather than going the commercial route, it can be nice to make a gift for that personal touch. [Mahesh Venkitachalam] had been down this road before, often stumbling over that common hurdle of getting in too deep and missing the deadline of the occasion entirely. Not eager to repeat the mistake, help was enlisted early, and the iCE bling earrings were born.
The earrings were a gift for [Mahesh]’s wife, and were made in collaboration with friends who helped out with the design. The earrings use a Lattice iCE40UP5k FPGA to control an 8×8 grid of SMD LEDs. This is all achieved without the use of shift registers, with the LEDs all being driven directly from GPIO pins. This led to several challenges, such as routing all the connections and delivering enough current to the LEDs. The final PCB is a 4-layer design, which made it much easier to get all the lines routed effectively. A buffer is used to avoid damaging the FPGA by running too many LEDs at once.
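The current budget explains why that buffer is there. The numbers below are assumptions for illustration, not from the earrings’ design, but they show how quickly a scanned matrix exceeds what a single FPGA pin can safely sink:

```python
# Hedged arithmetic for why a buffer sits between the FPGA and the
# LED grid -- the exact currents here are assumptions, not measured
# values from the earrings.
LED_CURRENT_MA = 5          # assumed per-LED drive current
ROW_WIDTH = 8               # one row of the 8x8 grid lights at a time
PIN_LIMIT_MA = 8            # typical FPGA I/O pin rating, order of magnitude

# With row scanning, the shared row pin carries every lit LED in the row.
row_current = LED_CURRENT_MA * ROW_WIDTH
print(row_current, "mA per row; exceeds one pin?", row_current > PIN_LIMIT_MA)
```

Even at a modest 5 mA per LED, a fully lit row asks one shared pin for 40 mA — several times a typical I/O rating — so the column pins can drive LEDs directly while the row lines get buffered.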
It’s a tidy build, which makes smart choices about component placement and PCB design to produce an attractive end result. LEDs naturally lend themselves to jewelry applications, and we’ve seen some great designs over the years. Video after the break.
If you own a desktop 3D printer, you’re almost certainly familiar with Slic3r. Even if the name doesn’t ring a bell, there’s an excellent chance that a program you’ve used to convert STLs into the G-code your printer can understand was using Slic3r behind the scenes in some capacity. While there have been the occasional challengers, Slic3r has remained one of the most widely used open source slicers for the better part of a decade. While some might argue that proprietary slicers have pulled ahead in some respects, it’s hard to beat free.
So when Josef Prusa announced his team’s fork of Slic3r back in 2016, it wasn’t exactly a shock. The company wanted to offer a slicer optimized for their line of 3D printers, and being big proponents of open source, it made sense they would lean heavily on what was already available in the community. The result was the aptly named “Slic3r Prusa Edition”, or as it came to be known, Slic3r PE.
Ostensibly the fork enabled Prusa to fine tune print parameters for their particular machines and implement support for products such as their Multi-Material Upgrade, but it didn’t take long for Prusa’s developers to start fixing and improving core Slic3r functionality. As both projects were released under the GNU Affero General Public License v3.0, any and all of these improvements could be backported to the original Slic3r; but doing so would take considerable time and effort, something that’s always in short supply with community developed projects.
Since Slic3r PE still produced standard G-code that any 3D printer could use, soon people started using it with their non-Prusa printers simply because it had more features. But this served only to further blur the line between the two projects, especially for new users. When issues arose, it could be hard to determine who should take responsibility for them. All the while, the gap between the two projects continued to widen.
With a new release on the horizon that promised to bring massive changes to Slic3r PE, Josef Prusa decided things had reached a tipping point. In a recent blog post, he announced that as of version 2.0, their slicer would henceforth be known as PrusaSlicer. Let’s take a look at this new slicer, and find out what it took to finally separate these two projects.
Revamped User Experience
The interface for Slic3r, and by extension Slic3r PE, wasn’t exactly the high water mark in terms of design. It was certainly functional enough, but it was never designed to be pretty. Since Prusa is in the business of selling relatively high-end 3D printers, it’s not hard to see how the spartan look of Slic3r could be a bit of a problem.
So it’s little surprise that the biggest user-facing change in PrusaSlicer is a completely new interface. It’s familiar to long-time Slic3r users, but at the same time has a much more modern feel. There’s a greater focus on performing common tasks with vector icons inside the 3D preview rather than having to dig down into menus to find them. The side panel now has a tabbed layout which allows the user to select how many options they want to see depending on their skill level. In general, PrusaSlicer is now a bit reminiscent of Ultimaker’s Cura slicer, which seems fitting considering both companies are trying to develop easy to use (and support) slicers for their customers.
For long-time Slic3r PE users who might think the new look of PrusaSlicer is just a fresh coat of paint, there are plenty of usability improvements and tweaks you’ll notice while using the new software. For instance, the real-time estimate of print time and cost is a huge improvement over previous versions, where you had to slice and export before you’d get that information.
A New Approach to Supports
For models with complex geometry, printing support structures is something of a necessary evil. Automatic support generation is a standard feature in every slicer out there, but on some models, it can get confused and produce sub-optimal results. Over the last couple of years, proprietary slicers like Simplify3D have tried to address this by implementing custom support structures that the user can place wherever they want.
The PrusaSlicer approach is something of a middle ground. Supports are still generated automatically, but the user can easily mask out areas where they don’t want supports to be generated. Alternatively, you have the ability to disable automatic support generation except for within specifically designated areas.
In this example, you can see how automatic support generation would fill the inside of the part with unnecessary and difficult-to-remove support structures, wasting plastic and making part cleanup harder than it should be. But by designating a specific zone in which support structures should be generated, this issue can be avoided.
Make no mistake, you can easily get yourself in trouble with this function if you’re not fully tuned in to the strengths and weaknesses of your printer. But that’s often the price to pay for this sort of fine-grained control.

The Power of Light
With the announcement of their SLA printer last year, Prusa found themselves in need of a slicer that could support this vastly different 3D printing technology. Rather than create a second slicer, they decided to start implementing support for light-based 3D printers directly into what was then still Slic3r PE. Since most of that work happened in the alpha and beta builds of Slic3r PE, PrusaSlicer represents the first time a large portion of this SLA capability has been available in a stable release.
In fact, Josef Prusa claims that upon its release PrusaSlicer immediately became the most polished and complete open source SLA slicer available. That seems a bold claim, but we have to admit we haven’t seen many entries in that particular niche to compare it against. Homebrew SLA printers are far less common than their FDM counterparts, but with the cost of the principal components coming down and now an arms race in terms of the open source tools to drive them, perhaps that will soon change.

In with the New, Out with the Old
While the core of Slic3r was written in C++, the high-level components including the user interface were done in Perl. According to Josef Prusa, this combination has proven to be a challenge to maintain over the years. Citing difficulty in finding contributors who are well versed in the language, as well as compatibility issues with the wxWidgets user interface library, the decision was made to start rewriting these legacy Perl components in C++.
On the whole this transition has been smooth, but at least one feature did end up on the chopping block because of it: USB printing. If you prefer to keep your printer physically tethered to the computer, you’ll need to stick with Slic3r PE for now. Currently, PrusaSlicer will only generate the G-code; you’ll need to get it onto your printer with something like Printrun, via SD card, or with OctoPrint set up so you can do it over the network.
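If you go the network route, the upload can even be scripted against OctoPrint’s REST API. Here’s a minimal sketch using only the Python standard library; the host name, API key, and file name are placeholders, and this assumes a stock OctoPrint install with its local file endpoint (`/api/files/local`) enabled.

```python
# Hypothetical sketch: pushing sliced G-code to an OctoPrint instance
# over the network. Host, API key, and file name are placeholders.
import urllib.request
import uuid


def build_upload_request(host, api_key, filename, gcode_bytes):
    """Build a multipart/form-data POST for OctoPrint's local file endpoint."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + gcode_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"http://{host}/api/files/local",
        data=body,
        headers={
            "X-Api-Key": api_key,  # OctoPrint authenticates via this header
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )


if __name__ == "__main__":
    # Upload the G-code that PrusaSlicer exported.
    with open("model.gcode", "rb") as f:
        req = build_upload_request("octopi.local", "YOUR_API_KEY",
                                   "model.gcode", f.read())
    urllib.request.urlopen(req)  # raises on HTTP errors
```

In practice you’d probably just use OctoPrint’s web interface or a plugin, but the point stands: once the slicer hands you a G-code file, getting it to the printer is a solved problem.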
Your USB cable isn’t the only thing being put out to pasture with the release of PrusaSlicer. Not long after creating Slic3r PE, Prusa started work on something of a “Plan B” slicer: PrusaControl. Believing that current slicers were too complex for beginners, PrusaControl was positioned as a bare-bones tool that would get you printing with as little hassle as possible. But with PrusaSlicer’s interface now adjustable to the user’s skill level, the decision has been made to officially abandon PrusaControl.

Looking Ahead
While PrusaSlicer has a new name and a long list of improvements, at its core it still runs on Slic3r. Just as importantly, it’s still released under the same open source license. That means anyone is free to try their hand at porting these new features over to vanilla Slic3r if they are so inclined. It might be more difficult now that Prusa is on a mission to rid the codebase of Perl, but it’s certainly not impossible. On the flip side, Josef Prusa says his team has every intention of merging in upstream Slic3r fixes and changes, so long as they make sense to include in PrusaSlicer.
In the long run, this move should end up benefiting Slic3r developers as much as it does anyone at Prusa. For one, the name change should keep them from getting hounded with bug reports that don’t apply to their code. In time, they may even find that with Prusa leading the charge on the user interface side of things, they can focus their efforts on improving the core slicing engine.
One thing is for certain: this is how open source is supposed to work. A successful company taking an open source project, adding their own resources and talent to it, and spinning it off into a new open source project is something worth celebrating. We can argue about the semantics of the name change, and the potential fracturing of the userbase, but in the end the code is out there and the community as a whole stands to benefit no matter whose name is on the top of the page.