14 Matching Annotations
  1. Jun 2025
    1. I found the “And, But, Therefore” (ABT) storytelling model incredibly helpful in thinking about how to structure clear, persuasive communication. I agree with the author’s point that most scientific or technical presentations suffer from being a string of facts connected only with “and,” which makes them hard to follow or care about. The ABT method offers a simple yet powerful shift—it forces you to frame a problem and then resolve it, which is much more engaging.

      This reading changed my perspective on how I approach writing and presenting technical information. I often focus so much on being accurate that I forget to create narrative flow. I now see how starting with context (And), introducing tension (But), and offering a resolution (Therefore) can make even complex topics more digestible and memorable. I’ll definitely try this format in future presentations or pitch decks.

  2. May 2025
    1. In his classic text Design for the Real World, Victor Papanek positions design as a universal practice in human communities:

      I completely agree with the author’s critique of the myth of the “universal user.” In tech, we often talk about designing for “everyone,” but in reality, it usually centers a narrow set of needs—often white, able-bodied, middle-class users. This reading made me reflect on how even the tools I’ve built or worked on may unintentionally exclude others, and it makes me want to be more intentional in including marginalized voices earlier in the process.

      I also appreciated how the reading offered hope—it doesn’t just critique, but also calls for action through community-led design and alternative frameworks. It made me feel like justice-oriented design is possible if we’re willing to do the work. I’m excited to explore how I can apply these principles in my future projects, especially in community-focused or accessibility-driven work.

    1. Here, we’ll discuss the ubiquitous medium of screen-based user interface design for digital computers, since that’s currently the dominant medium in society (this includes desktops, laptops, tablets, smartphones and even smart watches, but not augmented reality, virtual reality, or other non-screen interactions). Let’s discuss some of the core concepts in screen-based human-computer interaction (Myers, B., Hudson, S. E., & Pausch, R. (2000). Past, present, and future of user interface software tools. ACM Transactions on Computer-Human Interaction (TOCHI).) and some of the paradigms that exist in this medium.

      Chapter 7 on interfaces really opened my eyes to how much thought goes into something we interact with every day without thinking—interfaces! I liked how Ko broke down the different ways interfaces can succeed or fail, especially how small design choices can make or break usability. I definitely agree with the idea that interfaces aren’t just about aesthetics but also about how clearly they communicate what a system does and how to use it. Reading this made me reflect on apps I use that frustrate me, and now I realize those frustrations are often interface problems, not user problems. It made me want to be more intentional about designing interfaces that are intuitive and forgiving.

    1. Building things takes a long time and is very expensive, and usually much more than anyone thinks. Don’t spend 6 months engineering something that isn’t useful. Once you have built…

      I really enjoyed reading Chapter 6 about prototyping because it emphasized how prototypes don’t have to be perfect—they’re just tools to explore ideas and communicate designs. I’ve always felt pressure to make prototypes look “finished,” so this reading challenged my perfectionist mindset and reminded me that rough sketches or partial models can be just as valuable. I agree with Ko’s point that prototypes are great for quickly testing assumptions because I’ve seen how much time you can waste if you skip prototyping and jump straight into building the real thing. This chapter made me appreciate prototyping as a creative and iterative process rather than just a step toward the final product.

    1. Thus far, we’ve discussed two ways of evaluating designs. Critique collaboratively leverages human judgement and domain expertise and empiricism attempts to observe how well a design works with people trying to actually use your design. The third and last paradigm we’ll discuss is analytical. Methods in this paradigm try to simulate people using a design and then use design principles and expert judgement to predict likely problems. There are many of these methods. Here are just a sample: Heuristic evaluation (Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. ACM SIGCHI Conference on Human Factors in Computing (CHI).) is a collection of user interface d…

      I appreciated how Ko contrasts field studies with lab experiments—it helped me realize how important context is when studying user behavior. I completely agree with the point that users often behave differently in natural environments compared to artificial settings. This really resonated with me, especially since I’ve seen classmates run usability tests in ways that don’t reflect actual usage patterns. I also liked how this chapter emphasized observing over intervening. It reminded me that not every study has to “fix” something right away; sometimes just understanding the problem is powerful on its own.

    1. Critique leverages intuition, expertise, and judgement. These are powerful, invaluable sources of feedback, but they do have limitations. Most notably, they can be subjective and sparse. What if instead of expert speculation about whether a design will work (which is prone to “blind” spots (Nathan, M. J., & Petrosino, A. (2003). Expert blind spot among preservice teachers. American Educational Research Journal.), such as masking hard problems that experts view as easy), you want to actually observe whether it will work?

      I found Chapter 9 really useful in clarifying how controlled experiments are structured and why they’re so valued in empirical research. The example of comparing two interfaces for usability testing stood out to me—it made the whole concept feel more grounded. I agree with Ko’s emphasis on randomization and control because it’s something I’ve often seen overlooked in student projects. That said, I think the idea of “causal inference” still feels a bit intimidating. While the reading made it clearer, I’d love to see more examples of how small design decisions can influence outcomes in real-world user studies.

  3. Apr 2025
    1. There’s no need to reinvent the wheel. Learn from what has been tried and is currently in use, map it out in a competitive analysis, and leverage your findings to differentiate your solution from the competition. And if you are new to a particular vertical, e.g. financial technology, then a competitive analysis will be imperative to grow your understanding of the basic features and functions of a financial technology platform. Understanding the landscape of competitors not only helps inform your design decisions but also helps inform the overall product strategy. A UX competitive analysis uncovers valuable opportunities to create a superior product and stand out from the competition.

      The Medium article on competitive analysis really shifted how I think about looking at competitors. I used to see competitive analysis as just "finding flaws" in other apps, but this reading emphasized learning what works, too — and identifying gaps where you can differentiate. I completely agree with the idea that competitive analysis isn’t about copying features, it’s about making smarter design decisions based on what users already expect or are missing. It made me realize that a great product often succeeds not just because it's new, but because it better fits user needs in ways others haven’t yet. I’m excited to use these strategies for my own projects!

    1. There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

      I found the Pew Research guide on writing survey questions very eye-opening. I used to think that writing a survey was just about thinking of interesting questions, but this reading showed me how much thought needs to go into wording and structure to avoid bias. I especially agreed with the point about how small wording differences can completely change a respondent’s answer — like asking "Do you support government spending?" versus "Do you support investing in communities?" It made me realize that surveys are just as much about design as they are about data collection. Going forward, I’ll be much more careful when designing surveys for projects because I now see how easily they can mislead without meaning to.

    1. I thought Chapter 8 on critique was really insightful, especially the idea that good critique is a collaboration, not just criticism. I agree with Ko’s point that the goal of critique is to improve a design, not to prove someone wrong — that distinction really changed how I think about giving and receiving feedback. Before, I used to feel defensive when someone critiqued my work, but now I see it more like having a second brain helping me improve my ideas. I also liked the advice about structuring critiques around questions instead of just offering opinions — it feels more respectful and actually leads to more productive discussions.

    1. Other creativity strategies are more analytical. For example, if you want to think of something new, question assumptions. Einstein asked whether time is really uniform and absolute in space. That’s a pretty disruptive idea. Even questioning smaller assumptions can have big design implications. Consider several of the assumptions that recent software companies questioned:

      I really enjoyed reading Chapter 5 on creativity. I appreciated Amy J. Ko’s perspective that creativity isn't just a magical talent you’re born with, but a skill you can nurture through processes like questioning assumptions, combining ideas, and persisting through failure. I especially agreed with the idea that creative work requires time and persistence — it’s easy to romanticize creativity, but the reality is that it often looks like hard work and small improvements over time. This chapter reminded me that creativity isn’t passive; it’s an active practice, and that mindset shift feels empowering for my future projects.

    1. divergent

      I agree with Ko’s point that teams often jump into convergent thinking too quickly, skipping the creative benefits of divergent thinking. In my own group projects, I’ve noticed how quickly we settle on the first “good enough” idea, instead of pushing ourselves to explore more unique or risky options. Ko’s emphasis on structured brainstorming techniques reminded me that creativity isn’t just about sudden inspiration, it’s something we can actively cultivate through practice.

      However, I disagree slightly with the idea that constraints always encourage creativity. While I understand that constraints can spark innovative thinking by forcing us to work within limits, I’ve also felt overwhelmed by certain constraints that made ideation feel more stressful than generative. I think the impact of constraints depends on how they’re framed and how much space the team has to experiment within them.

      This reading helped me reflect on how I approach ideation and how I can improve that process by being more intentional. I’ll try to apply more divergent thinking strategies in future projects to avoid creative shortcuts.

    2. In modern design education (found primarily in schools of design and art) we see another form of design process that some have called “designerly ways of knowing” (Cross, N. (1982). Designerly ways of knowing. Design Studies.). Here, the idea is that trained designers arrive at knowledge through synthesis—forming coherent systems of ideas from disparate parts—whereas other kinds of thinking involve analysis—taking a coherent system and deconstructing it, as scientists do with nature. Synthesis is similar to divergent thinking in that they both focus on new possibilities; analysis and convergent thinking are similar in that they both reduce possibilities.

      Ko dives into the iterative process of design and the importance of prototyping and testing. I found this chapter particularly insightful because it challenged my earlier assumptions about the “perfect” design. I’ve always been someone who aims to get things right on the first try, so the idea that failure is an essential part of the process was eye-opening. Ko’s argument that prototyping is about learning, not perfection, was something I found very valuable. I now see prototypes not as a “draft” but as an opportunity to fail early, learn quickly, and refine my ideas based on real-world feedback. I completely agree with Ko’s view that embracing iteration leads to better designs in the long run, and I plan to incorporate more frequent prototyping into my own design process. This reading has definitely changed my perspective on how I approach problem-solving. It’s no longer about getting things right immediately, but about continuously improving over time.

    1. One simple form of knowledge is to derive goals and values from your data. What are people trying to achieve? For example, let’s say you did a bunch of interviews about trying to find a place to rent in Seattle. One person talked about trying to afford rent, another person talked about trying to save time by finding the right location, another person had a physical disability that made the layout of the house important. You need to extract these goals and represent them explicitly and try to understand what they are. Different designs may serve different goals, and so understanding the space of goals that you might design for is critical.

      This chapter really challenged my assumption that data analysis in design is clean and straightforward. I used to think once you had the data, you just organized it and got your answers—but Ko points out how much subjectivity is involved. I agree that making sense of research is more about interpretation and creativity than following a strict formula, which honestly feels both freeing and intimidating. I found myself questioning how often we unintentionally force patterns that aren’t really there, just to feel like we’re making progress. That really stuck with me, and now I’m thinking about how I can be more intentional about reflecting on my biases when analyzing user data.

    1. Interviews are flawed and limited

      I really like how this chapter called out the limitations of just asking users what they want. I’ve always thought surveys or direct questions were the best way to understand people’s needs, but now I see how easily that can lead to surface-level or even misleading insights. I 100% agree with Ko’s emphasis on observation and deeper listening—people don’t always have the language or awareness to express what’s really bothering them. This chapter made me reflect on how I’ve sometimes made assumptions based on my own perspective rather than taking the time to understand users' actual behavior. It’s a reminder that empathy in design takes more effort and intentionality than I used to think.