Stay tuned for the development of a research project over the coming year, funded by the Stimulerings Fonds and carried out in collaboration with scientists around the world working on machine learning and computational geometry.
Recent advances in training deep neural networks have led to significant progress on tasks such as automatic recognition of images and speech. As humans, we perceive our physical world in very particular ways, mainly through our senses, and we are able to assign meaning to objects and images. But how would this new artificial consciousness, which comes with the widespread adoption of AI, perceive and experience the world? How would our designed world change to accommodate this new consciousness? What if machines were capable of “designing” 3D objects by using archives of online images to create an approximation based on different 2D views of a certain thing? Would that even be considered design, and how closely would it resemble the original object?
I start from a speculative scenario in which AI systems have already been incorporated into every single aspect of our daily lives, and autonomous, semi-autonomous and tele-operated robots are widely accepted.
The scenario is built around two groups: the first are the techno-optimists; the second, the resistance, are the techno-pessimists.
The techno-optimists try to design environments that are inclusive to robots, following guidelines such as:
- Patterns, edges and colors give autonomous robots perceptual grip. Maximize robot perception through the appropriate selection of colors, textures, font/pattern sizes and materials for surfaces and finishes.
- Design signs based on robot sensing capacities.
- Maximize sensory signal strength and contrast (light, color, sound, etc.).
- Minimize environmental noise that would interfere with the robot’s sensors.
- Avoid shiny, mirror-like and smooth surfaces.
- By trying to accommodate and design for robotic perception, they have sacrificed parts of their own human ways of sensing the world. They live in a world that is not so much inclusive of robots as designed specifically for them. They have outsourced their perceptive mechanisms to systems and are living in spaces that they cannot really decode themselves. E.g. a digital clock for robots, …x for robots, …y for robots… How do humans navigate these environments that were not designed for them?
- Computer vision systems are not good at identifying small details and inferring from those details what an object might be, unlike humans, who can decipher what a thing is even from a small, blurred detail of it. Identifying objects relies on memory and experience, not mere vision.
- Adversarial examples: use patterns and distortion noise in misleading ways.
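To make the idea of adversarial examples concrete, here is a minimal sketch of the fast gradient sign method (FGSM): adding a small, deliberately chosen perturbation to an input so that a classifier changes its answer. The "model" below is a toy logistic-regression classifier with random weights, an assumption chosen for illustration only; it stands in for the far larger vision systems the scenario imagines, but the misleading-noise principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: p(class 1) = sigmoid(w . x + b).
# The weights are random stand-ins, not a trained vision model.
w = rng.normal(size=16)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A clean input that the classifier confidently assigns to class 1
# (it points along w, so the score w . x is positive).
x = w / np.linalg.norm(w)
p_clean = predict(x)

# FGSM: nudge every input dimension by epsilon in the direction that
# increases the loss for the true label y. For the logistic loss, the
# gradient with respect to x is (p - y) * w.
y = 1.0
grad = (predict(x) - y) * w
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)
p_adv = predict(x_adv)

print(f"clean prediction: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

The perturbation is bounded per dimension (at most epsilon), which is why, in image settings, adversarial noise can look like faint texture or pattern to a human while flipping the machine's verdict entirely.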