Archives For ARTICLES/LINKS

Unreal Engine – Wow.

September 29, 2017

Unreal Engine continues its march toward “photoreal”. It’s an important step, because once photoreal exists, the game industry can’t keep repeating the past.

Once rendered rocks are indistinguishable from real ones, we’ll want to see something new. I can’t wait to see what designers come up with!

I’m SUPER EXCITED about the new Arduboy! I think this is the best way to learn to program computers: start by modifying games, then make your own. It’s way more fun than starting with “Hello World” the way every long-winded book does. As a freebie, you also get to learn about controlling electronics in the process. If you have a kid, get them one of these and spend some time playing with it; there are a TON of free games online to start with.

https://www.kickstarter.com/projects/903888394/arduventure-on-arduboy-8-bit-rpg-for-your-wallet?ref=user_menu

Well, there you go…  Two of the top CEOs in the video game industry talking about streaming games…  Today it’s Yves Guillemot from Ubisoft.

“Streaming will totally change the way we create and play games, and will again positively disrupt how we think about gaming.”

Find the interview over on GamesIndustry.biz here:

http://www.gamesindustry.biz/articles/2017-06-14-ubisoft-ceo-streaming-will-be-the-next-big-thing

The 6th Annual IEEE GameSIG Intercollegiate Computer Games Showcase
gives student game developers the chance to present their best
student-developed video games for judging by an elite panel of video
game industry leaders.

Come out in support of your favorite colleges as student development
teams go head-to-head for school pride, bragging rights, and this year’s
GameSIG Cup. Finalists will demonstrate their games for a growing
number of students, alumni, and sponsors representing Orange County’s
internationally renowned video game industry.

This year’s showcase will be held at California State University,
Fullerton, at the Titan Student Union, Pavilions A, B & C. Free parking
is available in the State College Parking Structure (SCPS) adjacent to
the Titan Student Union. Park in #36 and walk to #31 on the campus map at:
http://www.fullerton.edu/campusmap/

Following the showcase, enjoy a reception featuring local gourmet food
trucks, game demonstrations by the student developers, and networking.
Put on your game face and come cheer for your favorite school, watch
demos, and play games. RSVP for free tickets via the Eventbrite link below.

The public is invited to attend the annual intercollegiate competition
among the major universities of Orange County and the surrounding area.

* 12:30 PM – 4:30 PM: Competition Among The Chosen (Finalists)
The greatest student game developers from the SoCal Empire’s universities
will compete for honors on the big screen before our panel of local
game industry leader judges in an American Idol format. Each student
game developer team will have ten minutes to shock and awe the judges.
* 4:30 PM – 6:30 PM: Reception & Game Demos (food & play)
We will have a casual game demo session, networking, and a reception.
The audience (you) will have the opportunity to get some hands-on time
with the games created by the finalists and semi-finalists, enjoy
refreshments, and network with industry professionals.

Warning: Seating is limited. RSVP for free tickets here:
https://www.eventbrite.com/e/sixth-annual-ieee-gamesig-intercollegiate-computer-game-showcase-tickets-30183138580

I’ve been watching the evolution of computer vision closely and things are getting VERY interesting right now.

We’ve always wondered when computers will be able to think like humans, and the answer has always remained that elusive “20 years away”. To really interface with us, they need some KEY things:

#1 TO REMEMBER (STORAGE & RECALL)
#2 TO LISTEN & COMPREHEND (MICROPHONES & COMPUTE)
#3 TO SPEAK (SPEECH GENERATION)
#4 TO SEE
#5 TO THINK (COGNITION, EMOTION & CREATIVITY)

Of those 5 things, the first three are pretty much nailed; #4 is next.

The evidence that #4 is still elusive: neither Siri on the iPhone nor Microsoft’s Cortana asks to see what you’re talking about. Siri should say “Can you show me that?“

When given the chance to see, the data reveals that the number one thing people ask a seeing computer about is problems on their own bodies. They worry about rashes and the like. It’s interesting, as it shows how most companies are working on the wrong problem: learning Starbucks, Mercedes, and Nike logos from all angles won’t get you there.

[Image: Starbucks]

Looking at a simple coffee cup… Algorithms today focus on the Starbucks logo and respond with offers of Starbucks products like “Starbucks iPhone Case”.  Huh?  At least identify it as “a white Starbucks ceramic mug”. Why are you just showing me iPhone cases?

The step that all the companies fail at is when you break the mug… Even a kid would say “a broken mug”, but after countless millions of dollars in research, every system still fails to nail it.

[Image: coffee-16787794]

I’ve seen multimillion-dollar systems analyze the kind of image above and return the word “Creamy“.  What?

I have friends at CloudSight.ai who have avoided the typical “buy data sets and crunch them” model because they knew they needed cognition (understanding and comprehension) of image concepts.  They have an open API and are used in numerous third-party applications today, processing countless millions of images from real people.

Most companies are working on straight “recognition”, and I get it: I’m a programmer, and I also love to think that programming alone can get us there, but it can’t.  It’s as if visual researchers are following the old path of audio researchers, trying to recognize individual words that have no context.

I remember Bill Gates talking about voice recognition once; he explained just how difficult it is to distinguish “Do you recognize speech?” from “Did you wreck a nice beach?” Even if a computer gets the words right, getting it to understand the question was a massive problem.  Only when researchers focused on understanding context did things start to leap forward, and the cloud learned to listen.
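A toy sketch of that leap (the word-pair counts below are purely illustrative and nothing like a production recognizer): if you score each candidate transcription against even a crude bigram model, the contextual reading wins over the acoustically similar nonsense.

```python
# Toy bigram "language model": how often word pairs appear in some
# training text. All counts here are made up for illustration.
BIGRAM_COUNTS = {
    ("do", "you"): 100,
    ("did", "you"): 90,
    ("you", "recognize"): 40,
    ("recognize", "speech"): 50,
    ("you", "wreck"): 1,
    ("wreck", "a"): 2,
    ("a", "nice"): 30,
    ("nice", "beach"): 5,
}

def score(sentence):
    """Sum the counts of adjacent word pairs; unseen pairs add nothing."""
    words = sentence.lower().split()
    return sum(BIGRAM_COUNTS.get(pair, 0) for pair in zip(words, words[1:]))

a = "do you recognize speech"
b = "did you wreck a nice beach"
print(score(a), score(b))  # prints: 190 128
```

Real systems use probabilistic language models trained on vast corpora, but the principle is the same: the words around a word decide what was actually said.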

[Image: CloudSight logo]

The reason the CloudSight.ai solution is interesting is that they’ve spent years working on the parental teaching loop that human brains require to grow. The reason a kid can understand the broken mug is that they understand the concept BROKEN: they’ve broken things and seen how they break.  They’ll keep seeing evidence of this idea growing up and can recognize BROKEN in any form.  There’s lots to learn…  “Those glasses are broken“, “Those glasses are old“, “That person looks ill“, etc.

Every single major corporation (Apple, Google, Microsoft, Facebook, Pinterest, etc.) will need the cloud to see and understand the billions of images and videos they are handling.  It’s a certain future, and it’s fun watching the progress.

http://cloudsight.ai/api
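For a rough sense of how a third-party app might use a captioning API like this, here’s a minimal sketch. The endpoint, field names, and response shape below are assumptions made for the example, not CloudSight’s documented API.

```python
import json
import urllib.request

API_URL = "https://api.example-vision.test/v1/describe"  # hypothetical endpoint

def build_request(image_url, token):
    """Build a JSON POST asking the service to describe an image."""
    body = json.dumps({"image_url": image_url}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
    )

def parse_caption(raw_json):
    """Pull the human-readable description out of the assumed reply shape."""
    return json.loads(raw_json).get("caption", "")

# A canned reply standing in for a live call:
reply = '{"caption": "a broken white ceramic coffee mug"}'
print(parse_caption(reply))  # prints: a broken white ceramic coffee mug
```

The key point is what comes back: a natural-language description of the scene, not a bag of logo labels.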

I just gave it a fun image to try… It nailed it.

[Image: Fish Bike]

Here’s someone testing all the top solutions:

http://www.business2community.com/brandviews/upwork/comparing-image-recognition-apis-01836977#7xoZcux6ybpe9FHM.97

So keep an eye on this space, it’s about to get very interesting.

If you’ve ever taken an image with atrocious white balance and, despite your best efforts at adjustment in Lightroom or Adobe Camera Raw, found it shifting from one color cast to another rather than becoming neutral, today we’ve got a tutorial from f64 Academy to help. Some types of scenes are inherently more challenging to obtain correct…

via How To Fix Bad White Balance | A Simple 3 Step Method — SLR Lounge

FPV Car Racing!

April 28, 2017

Perhaps you like the idea of FPV (first-person view) drone racing, but you’re a little uncomfortable with the thought of piloting a rapidly-moving quadcopter through the air. If you are, then maybe FPV car racing would be more to your liking. Although hobbyists have been doing it on a DIY basis for years, there’s a…

via FPV car racing is getting kinda gnarly — New Atlas