Tilman

Now

🤔 What am I up to at the moment?

Teaching Digital Experience Design at Nuremberg Tech. The current curriculum spans web design, app design, creative coding (p5.js), game design (PICO-8, Unreal Engine), and more experimental digital design fields (XR, Physical Computing, AI). Finding new ways to combine face-to-face and digital collaboration tools for learning about and discussing design (Confluence, Etherpad, Miro, Google Forms, Adobe XD, Figma).

Researching how people express themselves on the web. Researching and developing ideas for how the web can become an alternative to the sinking, ad-driven social media networks.

Playing Super Mario Bros. Wonder

Reading Mr. Penumbra's 24-Hour Bookstore by Robin Sloan

Listening to Giant Rooks (also, LIVE! 🤩)

Brent Simmons, the author of NetNewsWire (my feed reader for many years), writes about Mastodon support in his application. The interesting part: how can Mastodon (or any other ActivityPub source) be integrated into traditional "feed reading" apps? There are some significant differences. I chewed on this a lot last year while tweaking the Microsub parts of Knot. No substantial solutions so far, but this might be where possible futures are negotiated.

inessential.com/2023/12/17/on_mastodon_support_in_netnewswire*
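To make the difference a bit more concrete, here is a minimal sketch of the two entry points a reader app has to reconcile for one and the same account. The URL conventions below are Mastodon-specific assumptions (the per-account `.rss` feed and the `/users/` actor path); other ActivityPub servers lay out their URLs differently, which is part of why generic integration is hard:

```python
# Sketch: two ways a feed reader could follow a Mastodon account.
# The handle format and URL conventions are Mastodon-specific assumptions;
# other ActivityPub servers structure their URLs differently.

def mastodon_endpoints(handle: str) -> dict:
    """Map an @user@host handle to the two subscription options."""
    user, host = handle.lstrip("@").split("@")
    return {
        # Classic polling: Mastodon exposes a plain RSS feed per account,
        # but it only carries public posts, as a flat list of items.
        "rss": f"https://{host}/@{user}.rss",
        # ActivityPub: fetch the actor document as JSON and follow its
        # "outbox" collection -- richer data (boosts, replies, visibility),
        # but a different model than a flat feed.
        "actor": f"https://{host}/users/{user}",
        "accept_header": "application/activity+json",
    }

print(mastodon_endpoints("@Gargron@mastodon.social"))
```

A feed reader that only speaks the first dialect gets a lossy view; speaking the second means absorbing concepts (boosts, replies, visibility) that the classic feed model has no slot for.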

The Expanding Dark Forest and Generative AI / Maggie Appleton

In this 45-minute talk, Maggie Appleton explains the "dark forest theory of the web": how ads, trackers, clickbait, and predatory behaviours turn the public web into a hostile place for humans. She sketches how generative AI will accelerate this even more, what possible futures this might bring, and even gives some advice on how to work with AI reasonably. So many insights and great thinking, well worth the time:

www.youtube.com/watch?v=QPoM-h1fK8M*

Are mobile AI devices our future?

The Rabbit R1 (left) was just revealed at CES in Las Vegas: a mobile AI companion, smaller than the usual smartphone and primarily optimized for voice interaction. It resembles Humane’s AI pin (right), which was introduced to the public late last year. Are these two gadgets the start of a new class of devices that we will carry around in five years? I don’t think so, but there are some interesting design decisions nevertheless, so the introductory videos are well worth watching: Rabbit Tech Keynote and Humane AI Pin. Let’s have a closer look:

Beyond the marketing and shiny interface, both devices are more or less smartphones with a shrunken screen. The most important shift in interacting with these devices is the absence of separate "apps" as we've become accustomed to them over the last decades. You don't have to switch between operation modes or contexts; you just talk to the device and/or show something to the camera, and the "artificially intelligent assistant" figures out how to help you. In most cases, that leads to using the services we're already accustomed to: streaming music, texting, searching the web, taking photos or videos.

I think the biggest selling point for these devices is convenience. To use them, you don’t have to pull out your phone or even turn your head. You just activate it and talk. It’s magical when it works, but considering my experiences with AI assistants so far, that will surely not always be the case. If the results are not satisfying, you will likely end up looking at your smartphone screen again.

The most convenient and fastest way to browse through lots of content is still on a screen. Zapping through documents, picking from playlists, jumping through messages – these basic functions work much better visually than through spoken word. For the time being, I don’t think AI will understand our intentions so well that it can make decisions for us and render screens obsolete.

Maybe the assistants we already have will win the race for our attention? Alexa, Siri, Cortana, and Google Assistant are already integrated into our computers, smartphones, and tablets. As additional methods of interaction, they are just not as prominently displayed. And they are much less "intelligent" than the current wave of chat AI. Once they catch up, perhaps we won’t need new devices?

I see two major obstacles in the future of AI assistants in general:

First: privacy.

To serve us effectively, AI assistants will need to understand us in detail. What do we mean? What are our intentions? How do our complex lives function? To grasp that, they need access to our data: our communication, preferences, actions, behavior. They have to observe us very closely, all the time. I would not feel comfortable sharing my private data with any digital service to such an extent.

Second: power.

If we start to rely seriously on AI assistants, they will wield a lot of power over us, our decisions, choices, and actions. They will become the central intermediary in nearly every part of our digital lives. How can we ensure this power is not misused?

I feel there is a lot to negotiate before AI assistants can be truly useful. And it's not so much about the technology.

Adam Stoddard’s website surprised me with random background graphics complementing the clean typography: aaadaaam.com* The color theme picker is also a nice touch.

Information

https://tilman.me