I’ve been watching the evolution of computer vision closely and things are getting VERY interesting right now.

We’ve always wondered when computers will be able to think like humans, and the answer has always remained that elusive “20 years away”. To really interface with us, they need some KEY things:


Of those 5 things, the first three are pretty much nailed, #4 is next.

The evidence that #4 is elusive: neither Siri on the iPhone nor Microsoft’s Cortana ever asks to see what you’re talking about. Siri should say “Can you show me that?”.

When given the chance to see, the data reveals that the number one thing people ask a seeing computer about is problems on their body. They worry about rashes and the like. It’s telling, because it shows how most companies are working on the wrong problem: learning the Starbucks, Mercedes and Nike logos from all angles won’t get you there.


Looking at a simple coffee cup… Algorithms today focus on the Starbucks logo and respond with offers of Starbucks products like a “Starbucks iPhone Case”.  Huh?  At least identify it as “a white Starbucks ceramic mug”; why are you just showing me iPhone cases?

The step where all the companies fail is when you break the mug… Even a kid would say “a broken mug”, but after countless millions of dollars, every research team fails to nail it.


I’ve seen multi-multi-million dollar systems analyze this kind of image above and return the word “Creamy”.  What?

I have friends at CloudSight.ai who have avoided the typical “buy data sets and crunch them” model because they knew they needed cognition (understanding & comprehension) of image concepts.  They have an open API and are used in numerous 3rd-party applications today, processing countless millions of images from real people.

Most companies are working on straight “recognition”, and I get it: I’m a programmer, and I’d also love to think that programming alone can get us there, but it can’t.  It’s like the visual researchers are following the old path of audio researchers, trying to recognize individual words that have no context.

I remember Bill Gates talking about voice recognition once; he explained just how difficult it is to understand “Do you recognize speech?” vs. “Did you wreck a nice beach?”  Even if a computer gets the words right, getting it to understand the question was a massive problem.  Only when researchers focused on understanding context did things start to leap forward for the cloud being able to listen.
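That leap from isolated words to context can be sketched with a toy bigram language model. Everything below is invented for illustration (the probabilities are made up, not taken from any real recognizer); it just shows why scoring whole word sequences in context separates two acoustically similar transcriptions:

```python
# Toy bigram log-probabilities (invented for illustration): how likely
# word B is to follow word A in everyday English.
BIGRAM_LOGPROB = {
    ("do", "you"): -0.5, ("you", "recognize"): -2.0, ("recognize", "speech"): -1.5,
    ("did", "you"): -0.5, ("you", "wreck"): -6.0, ("wreck", "a"): -1.0,
    ("a", "nice"): -1.2, ("nice", "beach"): -2.5,
}
UNSEEN = -10.0  # penalty for a word pair the model has never observed

def sentence_score(words):
    """Sum bigram log-probabilities over consecutive word pairs."""
    return sum(BIGRAM_LOGPROB.get(pair, UNSEEN)
               for pair in zip(words, words[1:]))

# Two near-identical-sounding candidates for the same audio:
a = "do you recognize speech".split()
b = "did you wreck a nice beach".split()

# The language model prefers the contextually plausible reading.
print(sentence_score(a) > sentence_score(b))  # True: context disambiguates
```

Real recognizers use far richer context models than this, but the shape of the fix is the same: stop scoring words alone and start scoring what plausibly goes together.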

Cloudsight Logo

The reason the CloudSight.ai solution is interesting is that they’ve spent years working on the parental teaching loop that human brains require to grow. The reason a kid can understand the broken mug is that they understand the concept BROKEN: they broke things and saw how they break.  They will see more and more evidence of this idea growing up and can recognize BROKEN in any form.  There’s lots to learn…  “Those glasses are broken”, “Those glasses are old”, “That person looks ill” etc.

Every single major corporation (Apple, Google, Microsoft, Facebook, Pinterest etc.) will need the cloud to see and understand the billions of images and videos they are handling.  It’s a certain future, and it’s fun watching the progress.


I just gave it a fun image to try…   It nailed it.

Fish Bike

Here’s someone testing all the top solutions:


So keep an eye on this space, it’s about to get very interesting.

Wow, 20 years passed already.  Nice detailed look at humor in games and our old MDK game by George Ayres Mousinho from The Reactive Code.  I think the theme of Shiny’s games was that we always had a new hook (Sniper Mode in MDK, Possession in Messiah, etc.), and we tried to make it hard to predict what was coming next.  What was the next level of Earthworm Jim going to be?

The art that Nick Bruty and Bob Stevenson did for MDK was incredible for its time, and the 3D coding lead was Andy Astor, who sadly passed away from cancer.  He was a great loss to the video game industry.

So 20 years ago this was Shiny Entertainment’s first attempt at 3D…


Shockingly (according to Good Old Games) MDK still works in Windows 10!

Photography is my current hobby, I’ve been lucky enough to get training time with some of the world’s best photographers.  (There’s a Canon Experience Center near me.)

So if you are an aspiring model, how do you get a good photographer to take your photo for free?

The answer is “TFP”, a term which basically means the photographer, stylist, make-up artist & model all work for free so they can use the results in their portfolios.

There are plenty of sites out there like Model Mayhem & MeetUp that will offer countless TFP shoots in your area.  (Your job is to find the best photographers in there.)

What I’ve learned is you just can never take enough shots.  Photographers need to get to a point where they could literally use their camera in the dark, so every adjustment is instant and not distracting to the shoot, and the models need to study what their angles and emotions look like when captured.

I recently shot a very professional model (arranged by Canon); while posing she suddenly ran through a whole set of emotions, boom, then back to normal.  It surprised me at first, but she had realized I was shooting high speed, and she knew the best emotions that work with her face, so it was fun to watch someone work who must have done a thousand shoots before.

The point is, just get out there and shoot, then do it again, and again and again.  The pictures will keep getting better for everyone.

Every now and again you’ll meet someone very talented, you’ll form a team and the results make this whole process of Photography really fun.

PS. If you want dramatic shots, don’t be afraid to present your ideas!  It’s a collaboration and the photographer will enjoy the challenge.  (Just collect images up that you love on Pinterest for reference.)  In this photo below, I was challenged by the make-up artist to make a photo that was only red and white.

(Photo: the red and white shot)

So… there’s nothing to stop you from becoming a model, just start.

If you’ve ever taken an image with atrocious white balance, and, despite your best efforts at adjustment in Lightroom or Adobe Camera Raw, found it shifting from one color cast to another rather than becoming neutral, today we’ve got a tutorial from f64 Academy to help. Some types of scenes are inherently more challenging to obtain correct…

via How To Fix Bad White Balance | A Simple 3 Step Method — SLR Lounge
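The tutorial’s own 3-step method is behind the link, but to illustrate the underlying idea of neutralizing a cast, here’s a minimal sketch of the classic gray-world assumption (not the tutorial’s technique, and the toy image is made up): scale each RGB channel so the scene averages out to neutral gray.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each RGB channel so the image's
    mean color becomes neutral gray. img is float RGB in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel average
    scale = means.mean() / means             # bring each channel to the gray mean
    return np.clip(img * scale, 0.0, 1.0)

# A toy image with a warm (orange) cast: red lifted, blue suppressed.
rng = np.random.default_rng(0)
neutral = rng.uniform(0.1, 0.7, size=(4, 4, 3))
cast = neutral * np.array([1.3, 1.0, 0.7])

balanced = gray_world(cast)
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal R, G, B means
```

When a cast only shifts hue instead of neutralizing, it’s often because a single temperature/tint slider can’t fix all three channels at once, which is exactly what a per-channel correction like this addresses.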

Sangita’s Bench

April 29, 2017

One of my first “real” woodworking projects.


ZX Spectrum Next

April 29, 2017

Super excited about this project, it’s a new “enhanced” version of the Spectrum!



FPV Car Racing!

April 28, 2017

Perhaps you like the idea of FPV (first-person view) drone racing, but you’re a little uncomfortable with the thought of piloting a rapidly-moving quadcopter through the air. If you are, then maybe FPV car racing would be more to your liking. Although hobbyists have been doing it on a DIY basis for years, there’s a…

via FPV car racing is getting kinda gnarly — New Atlas