Here you will find all the necessary resources for the computer vision input.
This is a general presentation about computer vision. The main questions the video should answer are: What is computer vision? Where is it used? And how does it relate to interaction design in general?
Watch the presentation and write down questions and thoughts that come to your mind! We can discuss them later in the Zoom conference.
Let’s talk together about computer vision in general and discuss your questions and notes.
https://zhdk.zoom.us/j/4710337215
What do you expect of today?
How was it to watch the presentation?
Specific ideas for your group project?
How is the input structured?
Use Slack / Zoom for help
Take your first steps with Processing and computer vision by creating a paint tool with your own interaction method.
Create a new Processing sketch that loads this image and displays it. How can we find the position of the moon?
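One way to find the moon is to look for the brightest pixel. A minimal sketch of that idea, assuming the image has been copied into the sketch’s data folder (the filename "moon.jpg" is a placeholder):

```processing
PImage img;

void setup() {
  size(640, 480);
  img = loadImage("moon.jpg"); // placeholder name: use the image from this page
}

void draw() {
  image(img, 0, 0, width, height);

  // scan all pixels and remember the index of the brightest one
  img.loadPixels();
  int brightest = 0;
  for (int i = 0; i < img.pixels.length; i++) {
    if (brightness(img.pixels[i]) > brightness(img.pixels[brightest])) {
      brightest = i;
    }
  }

  // convert the pixel index back into image coordinates and mark the spot
  float x = map(brightest % img.width, 0, img.width, 0, width);
  float y = map(brightest / img.width, 0, img.height, 0, height);
  noFill();
  stroke(255, 0, 0);
  ellipse(x, y, 20, 20);
}
```

Since the moon is the brightest object against a dark sky, the marked pixel should land on it.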
Install the Video Library 2.0 and display the webcam image!
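A minimal capture sketch using the Video Library (the default camera is assumed; `Capture.list()` shows the available devices):

```processing
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height); // default camera
  cam.start();
}

void draw() {
  // read a new frame whenever one is available, then draw it
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}
```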
Combine brightest point tracking with the webcam capture to track the brightest point in your webcam video.
Create your own drawing software that is controlled by the light of your smartphone’s flashlight.
Think about additional features you could implement!
Discuss the morning & share what everyone has created.
https://zhdk.zoom.us/j/4710337215
Technical Input
Was it possible to follow?
Get deeper into computer vision by using OpenCV, a framework for analysing images with traditional computer vision algorithms.
Use OpenCV to track the brightest spot.

opencv.blur(10);                    // smooth the image to reduce single-pixel noise
PVector location = opencv.max();    // position of the brightest pixel
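Put together with the webcam capture, this could look like the following sketch (a sketch under the assumption that the OpenCV for Processing library is installed alongside the Video Library):

```processing
import processing.video.*;
import gab.opencv.*;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }

  opencv.loadImage(cam);
  opencv.blur(10);                  // blur first so noise does not win
  PVector location = opencv.max();  // brightest point of the blurred image

  image(cam, 0, 0);
  noFill();
  stroke(255, 0, 0);
  ellipse(location.x, location.y, 20, 20);
}
```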
Write your own sketch that is able to detect motion! Use background subtraction, image difference, or your own algorithm.
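The image-difference approach boils down to comparing two frames pixel by pixel. A plain-Java sketch of that core, independent of the camera code (names and thresholds are illustrative, not from the course material):

```java
public class MotionDetector {
    // Count how many grayscale pixels (0-255) changed by more than
    // `threshold` between the previous and the current frame.
    public static int countChangedPixels(int[] prev, int[] curr, int threshold) {
        int changed = 0;
        for (int i = 0; i < prev.length; i++) {
            if (Math.abs(prev[i] - curr[i]) > threshold) {
                changed++;
            }
        }
        return changed;
    }

    // Report motion only when enough pixels changed at once;
    // this filters out sensor noise on isolated pixels.
    public static boolean motionDetected(int[] prev, int[] curr, int threshold, int minPixels) {
        return countChangedPixels(prev, curr, threshold) >= minPixels;
    }
}
```

In a Processing sketch, `prev` and `curr` would come from the brightness values of two consecutive webcam frames.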
How will the sketch notify you when motion is detected?
Use Contour Detection to implement a multi-user drawing system.
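A possible starting point, assuming the OpenCV for Processing library: threshold the webcam image so each bright object (for example a flashlight) becomes one contour, and treat each contour as one user’s brush. The threshold and minimum area below are assumptions to tune for your setup:

```processing
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  cam.start();
  background(0); // never cleared, so the marks accumulate into a drawing
}

void draw() {
  if (cam.available()) {
    cam.read();
  }

  opencv.loadImage(cam);
  opencv.gray();
  opencv.threshold(100); // assumption: bright objects on a dark scene

  // each bright region becomes one contour -> one drawing user
  for (Contour contour : opencv.findContours()) {
    if (contour.area() > 500) { // ignore tiny noise blobs
      Rectangle box = contour.getBoundingBox();
      fill(255);
      noStroke();
      ellipse(box.x + box.width / 2, box.y + box.height / 2, 10, 10);
    }
  }
}
```

Assigning each user a stable colour would require tracking contours across frames, for example by matching their positions.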
Discuss the current state and mood.
https://zhdk.zoom.us/j/4710337215
Watch the video about depth sensing and how machines are able to perceive our world.
Questions, notes and comments.
This input is about computer vision combined with modern machine-learning techniques. It also serves as an introduction to machine learning in general: How does a machine learn? How does it see?
Download the Deep Vision Library (direct download link) for Processing and install it into your library folder. Try out the examples and see how the library works.
Use the FaceDetector Webcam Example and try to combine it with the emotion classifier. To create an emotion detector, use the following code sample:
FERPlusEmotionNetwork emotionNetwork = vision.createFERPlusEmotionClassifier();
emotionNetwork.setup();

// run on a single face
emotionNetwork.run(face);

// run on multiple detected images
emotions = emotionNetwork.runByDetections(testImage, detections);
You can find a Java example of how to combine these two methods here; it is Processing code as well, but you may have to adapt it slightly.
Furthermore, have a look at the Gender & Age prediction example and try to add it to your sketch as well.
Try to analyze your own set of images and think about what you could use this information for. Use the example provided here.
Try out the mouse example to train your own classifier.
What else could you classify?
Where do you get the data from?
How can you include this into your main project?
Please do not share the slides and the content from this page. The images and videos in the presentations are sourced but not licensed.