In the summer of 2025 I took an online course at the University of Gothenburg (Sweden), taught by Ewa Einhorn, on the relationship between games and visual art. scanyourcan.com was created in this context.
The unanimous promotion of competition has crept into the most intimate aspects of our lives. The struggle for recognition even contaminates the pursuit of well-being. Under the guise of personal development, which ultimately aims to make us more efficient at work, we are offered a hypothetical ideal self that is always within reach. If only we make the right choices, if only we tick the right boxes. Any socio-economic contextualisation becomes superfluous. We relentlessly feed the algorithms that will then calibrate the ideal product. But what if it were us who were calibrated to better fit production?
scanyourcan.com presents itself as a meditation app that leads to the hypothetical purchase of a can in a supermarket; the can is meant to represent a potential, improved self.
Guided meditations are in themselves “scores” that you follow to reach a certain mental state. I’m fascinated by the injunctions of personal development and the links these practices have with capitalism. The underlying idea is often to be more productive, better able to cope with stress, more focused… in order to work better.
I was also inspired by the use of AI in everyday life. I’ve had many conversations with my friends about how they use ChatGPT. Many people use it as a shrink, or even as a relationship counsellor. Others have fun trolling it.
The text is peppered with absurdities and critical anti-consumerist winks, oscillating between naivety, mawkishness and sarcasm. There are also references to the world of the Internet: pornographic consumption, trolls, Hikikomori, video games, BDSM role-playing games.
I also really enjoyed playing with words and their sound similarities: self, shelf, shell, s-hell-f; can (the verb) and can (the container); cart and card. This also points to the misunderstandings AIs develop when they don’t fully grasp human prompts or situations.