notes on human-ai interaction

Jan 19, 2025

generated by NovelAI

overview

Collection of my various revelations as I study human-AI interaction. My goal is to answer the question: "How can we design AI interaction to enhance human capability?" I'm convinced machine learning can positively alter human experience; it isn't just something to mathematically optimize, which feels too brutally logical.

The way Josh Lovejoy (Principal UX @ Google PAIR Lab) puts it:

The role of AI shouldn’t be to find the needle in the haystack for us, but to show us how much hay it can clear so we can better see the needle ourselves.

human ai collaboration

Machine learning is a black box, and understanding how a user interacts with an unbounded system is even more ambiguous. Here are some notes from my attempts to dissect this problem and propose solutions:

  • Experiential framework: Familiarity is king. Attach new interactions to existing mental models so the user feels in control and owns the system, not the other way around.

on designing ai

  • three models: Capability (role with the human), Accuracy (representing the human), Learnability (evolving with the human)
  • treat human frailty as an essential input to technical architecture, instead of treating ai (pure automation) and humans (fallback option) as separate paradigms.
  • develop systems that help humans thrive in novelty and creation. the system is best at repetition and intense focus. i.e. support what makes human experience deeply satisfying and leave the grunt work to ai.
  • humans as a probabilistic system.
  • human ai collaboration is founded on shared intuition. predictions about the future can operate off concrete facts from the past OR ambiguous feelings about the future. a system that only operates off the former never evolves organically.
  • pillars of trust: predictability (expectation) and dependability (consistency). essentially, consistently show what the person expects to see. this is why we gravitate toward people similar to us, because we can expect and consistently predict their response.
  • if ai does not align to known mental models, it usually means there is a mismatch between the training samples and real human conditions. call it overfitting or whatnot.
  • "AI’s foundational affordance is that it can form intuition": humans and ai should learn from each other, instead of a one-way road.
  • semantic ladder signifiers: breaking down outputs to its most fundamental building blocks. Similar to how an ai interprets information and builds confidence in the system on a conceptual level. adjacent to interpretability.

ml interpretability

Interpretability is the most important problem in interaction. How does it think? What are the inputs and outputs? How does it make decisions? Traditionally, this was all stored neatly in a black box. Here are some insights I've gathered on explainable ai:

  • Tilde Research is building interpreter models at every step of the TRAINING process instead of after the fact. e.g. for a sentence like "I love playing violin", you can ascribe the ai's internal features to each word
  • Sparse autoencoders extract the most semantically relevant features from huge amounts of data and store them in a vector space.
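To make the sparse-encoder idea concrete, here's a minimal sketch of the sparse autoencoder pattern used in interpretability work. The weights and dimensions here are made up (a trained SAE learns them from model activations); the point is the shape of the computation: activations get projected into a much wider feature space, and only a handful of features are allowed to fire per input.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_features, k = 16, 64, 4  # toy sizes; real SAEs are far larger

# Random encoder/decoder weights stand in for trained ones.
W_enc = rng.normal(scale=0.1, size=(d_model, d_features))
W_dec = rng.normal(scale=0.1, size=(d_features, d_model))

def encode(x, k=k):
    """Project an activation vector into the wide feature space,
    keep only the top-k strongest features (the 'sparse' part),
    and zero out the rest."""
    pre = np.maximum(x @ W_enc, 0.0)   # ReLU feature activations
    idx = np.argsort(pre)[-k:]         # indices of the k strongest features
    z = np.zeros_like(pre)
    z[idx] = pre[idx]
    return z

def decode(z):
    """Reconstruct the original activation from the sparse code."""
    return z @ W_dec

x = rng.normal(size=d_model)           # a fake model activation
z = encode(x)                          # sparse feature vector, <= k nonzero
x_hat = decode(z)                      # approximate reconstruction of x
```

Because each input lights up only a few of the 64 features, each feature tends to end up with an interpretable meaning, which is exactly the property that makes these useful for explainability.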

human perception

Perceptual models that enhance human trust in the system: risk perception, decision-making frameworks, pattern recognition, and contextual adaptation.

misc & analogy

On the surface, HAII seems abstract. Progress isn't marked by the same finite achievements as launching a rocket, and thus requires a new definition. Here are some analogies to help frame the problem:

  • Frankenstein situation: the creator, obsessed with scientific advancement, failed to consider how to interact with his creation.
  • Computational orientation: Bias for technological progress (ex: inference time, cost function optimization, etc.) ignores the human experience.
    • people who can reason beyond pure logic often study humanities and social sciences instead of pure engineering.