Camera attachment jolts a shutterbug into snapping "better" photos
There's nothing more frustrating than missing an ideal photo opportunity. That may be less of a problem with a camera attachment and grip built by Peter Buczkowski, which sends an electric shock to the photographer when its AI-trained brain decides the scene out front is sufficiently beautiful. Though beauty, in this case, is most definitely in the eye of the AI beholder.
Buczkowski's Prosthetic Photographer is one of three projects from his master's thesis, "Experiments in Human Computer Interaction through electrical body part stimulation." The modular attachment is reportedly compatible with any mirrorless or DSLR camera, and could mean a photographer needn't be a skilled scene snapper at all. The shutterbug can simply wander around pointing the camera in any direction, and only the best scenes will be captured. At least that's the theory.
Inside the box mounted above the camera is a Raspberry Pi 3 running an image classifier that constantly monitors the scene out front. The classifier is based on Google's Inception model, retrained on a dataset of over 17,000 images labeled high or low quality. When the software is at least 95 percent confident that it's looking at a high quality picture opportunity, it sends a signal to a TENS (transcutaneous electrical nerve stimulation) module powered by a 9 V battery.
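The decision loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Buczkowski's actual code: the function names (`classify_frame`, `fire_tens`) and the frame representation are assumptions, and a real build would run the retrained Inception model on each camera frame and drive the TENS module through a GPIO pin.

```python
# Hypothetical sketch of the Prosthetic Photographer's decision loop:
# score each frame with the classifier, and pulse the TENS module only
# when confidence reaches the 95 percent threshold mentioned above.

THRESHOLD = 0.95  # classifier confidence required before zapping the user


def classify_frame(frame):
    """Stand-in for the retrained Inception classifier.

    Returns the model's probability (0.0 to 1.0) that the frame is a
    high quality photo opportunity. Here the frame is just a dict so
    the sketch is runnable without a camera or TensorFlow.
    """
    return frame.get("score", 0.0)


def fire_tens():
    """Stand-in for pulsing the 9 V TENS module (e.g. via a GPIO pin)."""
    return "pulse"


def decide(frame):
    """Fire the TENS pulse only when the classifier is confident enough."""
    if classify_frame(frame) >= THRESHOLD:
        return fire_tens()
    return None


# A frame scored 0.97 clears the threshold; one scored 0.60 does not.
decide({"score": 0.97})
decide({"score": 0.60})
```

The one design point worth noting is the fixed 95 percent threshold: set it lower and the photographer gets shocked constantly; set it higher and the camera may never fire at all.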
This in turn delivers a small electric pulse to the photographer via two electrodes on the grip handle, causing the index finger to twitch. The intensity of the jolt can be adjusted using knobs on the back, though the shock needs to be strong enough to make the user press the button at the front of the handle. That button fires the camera shutter, capturing the high quality image. A rechargeable power bank under the grip handle powers the Pi mini computer.
The camera hosting the system needs to be in auto mode with continuous autofocus running to ensure the shockingly beautiful images captured aren't blurred or over/under-exposed. It's not entirely clear how the system compensates for the camera shake likely in play when the user gets zapped, but Buczkowski says the end result should make "everyone using it as good [a] photographer as the data it was trained on."
Judging by the composite of images showcased above, we'd say that the results are a tad hit and miss and suggest that the system needs a little more training. We can't see such a device ever making it to production, but if you want to see the Prosthetic Photographer in action you can get a brief taste in the video below.
Source: Peter Buczkowski