Tesla’s Director of Artificial Intelligence (AI), Andrej Karpathy, shared a thread this week about a new project his team is working on, along with some video footage from it. The thread, in a nutshell, is an invitation for anyone interested in helping Tesla solve this particular problem to apply for a job.
Karpathy noted that the videos showed panoptic segmentation predictions from the new project and were too raw to run in the car. So, instead, the team is feeding them into auto labelers.
In AI, panoptic segmentation is the task of assigning every pixel in an image a class label and, for countable objects such as cars and pedestrians, an instance identity. In other words, it is pixel-level classification: it partitions images or video frames into distinct segments while keeping separate objects of the same class apart. An auto labeler simply labels raw, unlabeled data. Labeling helps an AI understand what a pattern means. PatternEx has an in-depth article about this term and uses the Pandora app as an example: when you hear a song and click the thumbs up, you are basically telling the app that you like the song. That is labeling. Another example is a parent reading a book to a baby, tapping an image of a dog, and saying the word “dog.”
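To make that concrete, here is a minimal sketch in Python of what a panoptic segmentation output looks like. The arrays and class names below are made up for illustration and are not from Tesla’s stack; the point is simply that every pixel carries both a class label and an instance id, so two cars in the same frame end up as separate segments even though they share a class.

```python
import numpy as np

# Hypothetical label set for a tiny 4x6 "frame" (illustration only).
CLASS_NAMES = {0: "road", 1: "car", 2: "pedestrian"}

# Per-pixel class labels.
semantic = np.array([
    [1, 1, 0, 0, 1, 1],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 2],
    [0, 0, 0, 0, 0, 2],
])

# Per-pixel instance ids: the two car blobs get ids 1 and 2, while
# background "stuff" like road keeps instance id 0.
instance = np.array([
    [1, 1, 0, 0, 2, 2],
    [1, 1, 0, 0, 2, 2],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 1],
])

# A common way to encode a panoptic result: fold class and instance
# into a single id per pixel using a label divisor.
LABEL_DIVISOR = 1000
panoptic = semantic * LABEL_DIVISOR + instance

# Enumerate the segments: each unique id is one coherent region of pixels
# that shares a class and, for object classes, an instance.
for seg_id in np.unique(panoptic):
    cls, inst = divmod(int(seg_id), LABEL_DIVISOR)
    mask = panoptic == seg_id
    print(f"{CLASS_NAMES[cls]:<10} instance {inst}: {mask.sum()} pixels")
```

The idea behind an auto labeler is to produce maps like `semantic` and `instance` directly from raw video, so humans only review or correct the results instead of drawing every segment by hand.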
So, in layman’s terms, Andrej is simply saying that the videos come from a new project in which the system groups the parts of an image or video that belong to the same object and the results are then labeled automatically by AI. From there, the aim is to keep improving the system so the AI gets better and better at seeing the world in a complete, comprehensive, human (but actually superhuman) way.
Karpathy pointed out that it is still early days for this task and that Tesla needs help perfecting these panoptic segmentation predictions and realizing their downstream impact.
Who’s gonna help Tesla? Anyone who wants to can apply to join the team here.