⏱ Estimated reading time: 8 minutes

How to get unwilling users to use a mobile app?

By Cloé Brissaud, Technical consultant @ ekino

See also: L’utilisateurice forcée, qui n’aime pas l’informatique (the user who is forced to use computers and does not like them)

About 15% of French people are not comfortable using electronic devices. They are left aside during the design and development of new websites and apps, their needs are not taken into account, and as a result they grow less and less comfortable with technology over time. Let’s explore some ideas that can help include digitally excluded users.

Ways to help

Avoid invisible UI elements

When it comes to user interfaces, being intuitive really depends on the target audience. Intuitiveness is partly a matter of experience: “I have seen those interface elements before, therefore I know how I am expected to interact with them”. Not everyone has that prior experience, and for those users concepts such as “cards” are unfamiliar. Many UI conventions are only learned by encountering them across apps: clickable cards, unlabeled navigation bars, swiping items off lists, and so on.

Invisible elements increase cognitive load, since their interactions must be memorized, and they can be the cause of mishaps. When an interface does not react the way the user intended, it can cause uncertainty, doubt, and a loss of self-confidence and of the ability to learn. Label your interactions and avoid invisible actions.

Text-to-speech, speech-to-text

Some keyboards natively let users dictate text with their voice. This avoids lengthy and complex interactions with the device and lets some users type faster. Voice commands are also useful when the user cannot physically reach the device, such as when they are driving (though you should never use your phone while driving) or when they are wearing gloves. This enables multitasking, thanks to the reduced cognitive load of not having to focus on the device.

Voice commands and speech recognition can be implemented in iOS with the Speech Framework and in Android with the SpeechRecognizer API.
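The article points to those native APIs; as a rough browser-side analogue, here is a minimal TypeScript sketch using the Web Speech API (the phrases are made up, and SpeechRecognition typings may require an extra @types package, hence the loose typing):

```typescript
// Text-to-speech: read a confirmation out loud instead of relying on on-screen text.
speechSynthesis.speak(new SpeechSynthesisUtterance("Your order has been placed."));

// Speech-to-text: fill a field from the user's voice.
// The constructor is still prefixed in some browsers, hence the fallback.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionCtor();
recognition.lang = "en-US";
recognition.onresult = (event: any) => {
  const transcript: string = event.results[0][0].transcript;
  console.log("User said:", transcript);
};
recognition.start();
```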

Element size and positioning

Fitts’s law states that the time a user needs to reach a button depends on the distance to the button and on its size. The distance factor matters less when users tap their targets rather than click, but the size of the button remains the biggest determining factor. Likewise, the device form factor, the size of the user’s finger and the performance of the touchscreen digitizer can degrade the user experience when UI elements are not adapted.
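To make that concrete, here is a small sketch of the Shannon formulation of Fitts’s law; the a and b constants below are illustrative placeholders, not measured values:

```typescript
// Fitts's law (Shannon formulation): the predicted time to reach a target
// grows with the distance to it and shrinks with its width.
function fittsTime(distance: number, width: number, a = 0.1, b = 0.15): number {
  const indexOfDifficulty = Math.log2(distance / width + 1); // in bits
  return a + b * indexOfDifficulty; // in seconds, with illustrative constants
}

// A 24 px button 400 px away takes noticeably longer to reach
// than a 48 px button 200 px away.
console.log(fittsTime(400, 24).toFixed(2)); // ≈ 0.72
console.log(fittsTime(200, 48).toFixed(2)); // ≈ 0.46
```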

Make UI elements distinguishable from one another and place them where users can reach them with one hand. If two buttons trigger two different actions, space them apart to prevent mistakes.

Conclusion

There are many ways to help users use mobile apps: avoid irreversible actions, warn users through modals when such actions do happen, and use an entry-level phone to experience your application under harsh performance conditions.

JavaScript in the browser, how does it work?

By Maxime Dubourg, Frontend Engineer @ ekino

Web browsers are among the most complex and optimized programs there are. They span tens of millions of lines of code to support Web standards, network features, tab management, integrations, extensions and so on. In the middle of all this, how is it possible for browsers to execute scripts? How do they run JavaScript? Well, mostly on a single thread.

First off, JavaScript runs in a scripting engine. Much like a layout engine is responsible for computing and rendering web pages on the screen, a scripting engine runs pieces of code securely to implement interactions. The most famous is V8, which powers Google Chrome, most of its forks, and Node.js.

Scripting engines have converged on mostly the same set of components:

  • A call stack, which stores the stack of running functions;
  • A heap, where variables are stored;
  • A callback queue, where asynchronous functions such as setTimeout and fetch add tasks to run once the asynchronous work completes. Note that setTimeout does not guarantee that the callback will execute immediately after $timeout milliseconds; it only guarantees that it will not execute before $timeout milliseconds have elapsed (see the sketch after this list);
  • The event loop, which checks whether the call stack is empty and, when it is, pulls the next task from the callback queue.
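A minimal sketch of those last two points: the callback below is queued with a 0 ms delay, yet it can only run once the synchronous work has released the call stack:

```typescript
const start = Date.now();

setTimeout(() => {
  // Queued after 0 ms, but only executed once the call stack is empty.
  console.log(`Callback ran after ${Date.now() - start} ms`);
}, 0);

// Long-running synchronous work that keeps the call stack busy for ~500 ms.
while (Date.now() - start < 500) {
  // busy-wait
}

console.log("Synchronous work done");
// Output order: "Synchronous work done", then "Callback ran after ~500 ms".
```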

This means that running asynchronous tasks and acting on their results requires the browser to finish its currently running tasks first. The same applies to rendering and user interaction: they are all orchestrated from one shared thread!

To render a page, browsers use their rendering engine, which is in charge of understanding the Document Object Model and CSS and computing how the page should be drawn, about 60 times per second. When JavaScript runs on the same thread as a UI-heavy website, this can make things lag and leave a bad impression on your users. This is why it is crucial to avoid blocking the main thread! APIs can help schedule tasks just before rendering (requestAnimationFrame) or when the browser starts idling (requestIdleCallback).
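As a rough illustration of those two APIs (the "box" element id is a made-up example, and requestIdleCallback is not available in every browser):

```typescript
// Visual work: scheduled just before the next paint, roughly 60 times per second.
function onFrame(timestamp: DOMHighResTimeStamp): void {
  const box = document.getElementById("box"); // hypothetical animated element
  if (box) {
    box.style.transform = `translateX(${(timestamp / 10) % 300}px)`;
  }
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);

// Low-priority work (analytics, prefetching, etc.): scheduled when the browser is idle.
requestIdleCallback((deadline) => {
  while (deadline.timeRemaining() > 0) {
    // process a small chunk of non-urgent work, then yield back to the browser
    break;
  }
});
```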

Technically, each tab does not exactly run in one thread: every navigation context has a main thread. To defer work to a different thread, use Web Workers. Finally, browsers don’t exactly have one callback queue: they schedule work based on the nature of the tasks: Promises, networking, layout, etc.
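Here is a minimal Web Worker sketch under assumed file names (worker.ts compiled to worker.js): the computation runs off the main thread and the result comes back as a message:

```typescript
// --- worker.ts (compiled with the "webworker" lib) ---
// Runs on its own thread, so heavy work here never blocks rendering.
onmessage = (event: MessageEvent<number[]>) => {
  const total = event.data.reduce((acc, n) => acc + n, 0);
  postMessage(total);
};

// --- main.ts ---
// "worker.js" is the compiled worker file; the path is an assumption.
const worker = new Worker("worker.js");
worker.onmessage = (event: MessageEvent<number>) => {
  console.log("Sum computed off the main thread:", event.data);
};
worker.postMessage([1, 2, 3, 4, 5]);
```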

mos(AI)c

By Laurent Chastrusse, Lead Service Designer @ ekino
By Olivier Allevi, Creative Director

In just a few years, generative AI model sizes have exploded, and their performance now makes them genuinely useful. Language is no longer an exclusively human trait, since LLMs can generate near-perfect sentences. Voice has an edge over text: it creates an emotional connection with the user, the consequences of which have been studied in fiction for decades, for example in Brave New World, Transmetropolitan and Her.

Voice interactions with today’s assistants and personalized generative AIs have consequences, among which:

  • An increased cognitive load: voice captures the user’s focus, which interrupts their task in favor of the assistant;
  • Alert fatigue: when too many interactions bring little to no value to the user;
  • Low interoperability between AIs and between AIs and services;
  • Less creative output, as users are confined to a bubble of personalized results: their profile has been analyzed and filed, they depend on the tool and lose some of their capacity to search for themselves;
  • A change in the perception of the world and the self;
  • A loss in the ability to analyze and take a step back.

Hope is not lost, as there are ways to cope with these issues:

  • Better cooperation between AI agents and services. This would improve the overall quality of the interactions;
    • This also means that some system must orchestrate the others. Who should be responsible?
  • An ethical and legal framework for data sharing;
    • Personal assistants: how far should they go?
  • Educating users, cultivating critical thinking and intellectual effort.
    • Avoid passive media consumption, especially because people tend to give more credit to what they hear from other people.

These course corrections would also come with consequences:

  • Sensible usage of AI systems: does the user need AI? Not every time! We must stop shoving this need down our users’ throats!
  • Questioning customer obsession. Our issues are not unique, we’re all in the same boat and can solve them collectively.
  • Rethinking the attention economy and the role of emotions in content. We must create a model based on value, transparency and authenticity.