December 4, 2018

Opening Sketch files in Windows. No credit card required.

Recently my LinkedIn network mentioned a very nice library of human avatars: Humaaans. While this is an excellent tool, and I've played with this idea myself in the past, the average instructional designer, particularly in a corporate setting, doesn't usually work on a Mac. And the files included in the Humaaans library can be opened in Sketch (a Mac-only program) or InVision Studio (also currently available only for Mac, although a Windows version is coming sometime in the future).

So, what can an average Windows-PC wielding instructional designer do? How can we lay our hands on this goodness?

The answer is simple and quite unexpected. Get yourself Figma. Figma is a free (for individuals) browser-based prototyping tool that I've used extensively for my UX Designer course and fell in love with. I have to be honest: the learning curve can be quite steep if you're learning it from scratch, but it is definitely worth it. If you want to jump on the whole "I'm a learning experience designer" bandwagon and you can learn only one UX-related tool, go for Figma.

My romance with Figma aside, in this case it has one undeniable benefit. It can open Sketch files and it can export your creations as PNGs. Make that two benefits. And here's how to reap them.

Set up

First, set everything up in five easy steps:

  1. Register with Figma.
  2. Download the .sketch file from 
  3. Open Figma.
  4. Import the .sketch file.
  5. Ignore any concerns about missing fonts.

Basic modifications

With the file open, select any of the humans, for example, in the "Basic" frame.

If you click around, you'll notice that each human consists of four parts:

  • head
  • body
  • legs
  • shoes
Go ahead and click on any of these four parts, for example, a head. Then, take a look at the right-hand panel. You'll notice an "Instance" dropdown in it.

Click on this dropdown and you will see all the different heads that you can use on this model. Pick a head and it will automatically replace the current one. The same works for the other components, such as body, legs and shoes. Note that each shoe has to be selected separately.

So, just by playing around with the component instances, you can immediately create a slightly different scene from the one we started with:

With this knowledge unlocked, you can modify any of the models/compositions already included in the file, so try it out. 

Rotating things

If so far you've been working with objects in the Basic frame, you may have noticed that although you can rotate the complete humans, you can't move, rotate or resize their heads or legs. This is because these two guys are components themselves, so their heads, legs etc. are components within components. Unfortunately, anything that is contained within a component cannot be moved or resized in Figma. But do not despair! If you look at the frame "Separated components", you'll see that the humans in there are groups of components.

Component on the left (note the purple frame) and a group on the right (note the blue frame).

In the image above, the character in the red jacket is a group of separate components and can therefore be rotated and resized as you wish. For example, like this:

Rotating components

You may, however, want to make more advanced changes, like moving legs and arms separately. For example, maybe we don't like this sad doctor sitting and rubbing her knee; we want her to strike a flying hero pose.

As I mentioned, anything within a component cannot be resized or repositioned. So, since sleeves and hands are elements of the "body/jacket" component, you can't move them unless you break the component into separate objects. The consequence of this action is that you will lose the ability to "switch appearances" via component instances (as the component will cease to exist). So, once you've made up your mind about the character's clothing and are ready to move her arms and legs, select the component you want to modify, right-click and choose "Detach instance":

The component will be broken into the pieces it's made of.

Note: at the moment Figma doesn't handle this procedure cleanly, so once you've clicked "Detach instance", press Shift+Ctrl+G to get a proper set of separate objects. With that done, feel free to move and rotate them as you wish. You can also group objects (such as arms and hands) with Ctrl+G to rotate them together.

Striking a pose


Exporting

You can export your creations as PNG, SVG or JPG. To do this, select all the elements you want in your picture and group them together. Then, in the right-hand panel, find the Export menu. There, click +, adjust the export settings as needed and click "Export Group".
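Incidentally, if you ever need to export lots of assets, Figma also has a REST API that can render file nodes to images. Here's a minimal Python sketch of how such a request could be built; the file key and node IDs are placeholders, and a real call needs your personal access token, so treat this as an illustration rather than a recipe:

```python
import urllib.parse

FIGMA_API = "https://api.figma.com/v1/images/{file_key}"

def image_export_url(file_key, node_ids, fmt="png", scale=2):
    """Build the URL that asks Figma's REST API to render the given
    nodes as images; the JSON response maps node IDs to image URLs."""
    query = urllib.parse.urlencode({
        "ids": ",".join(node_ids),   # node IDs look like "1:2"
        "format": fmt,               # png, svg or jpg
        "scale": scale,
    })
    return FIGMA_API.format(file_key=file_key) + "?" + query

# A real request would also send your personal access token:
#   urllib.request.Request(url, headers={"X-Figma-Token": "<token>"})
print(image_export_url("FILE_KEY", ["1:2", "1:3"]))
```

For one-off exports like ours, the in-app Export menu is the way to go; the API only pays off when you automate.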

Et voila! You can now open Sketch files in Figma and do basic editing. Take that, Apple!

November 28, 2018

User Research Participants Wanted

As part of my UX Designer course, I'm working on a website that helps people get expert advice on tax or finance, book an expert's services or have a video call with an expert. To evaluate and improve the website's usability, I'm looking for people who would be interested in trying out its prototype.

What will you need to do?

I’ll ask you to complete short tasks (for example, to search for an accountant) using the website’s prototype. While “tasks” may sound daunting, rest assured that their purpose is to test the usability of the prototype, not your skills. In addition, I’ll ask for your personal opinion about your experience using the prototype.

Technical Requirements?

We'll chat over a Webex web conference; you'll visit the prototype and share your screen with me so that I can see your interactions with it. This means that you'll need to:

  • have a stable Internet connection
  • install a browser add-on (or an app, if using a tablet/mobile) to join the conference
  • have a microphone so we can chat
  • ideally, have a webcam (this is optional, but very helpful)
  • share your screen

Note on screen sharing: if you're joining from a computer, you will be able to choose the screen or program that you're sharing. However, if joining from a tablet/mobile, your complete screen will be shared. If you're concerned about privacy, I would recommend joining from a computer.

Please note that the session, with your permission, will be recorded for study purposes. The recording will not be shared, and any findings will be reported in summarized form only. The findings will not contain information that would help others identify you (e.g. your name or voice).

When and where?

Date range: 5th to 8th of December 2018
Time zone: GMT +1 (see here for exact time slots)
Location: remote via Lookback or Webex

How long?

Approximately 20-25 minutes.


First, let me know which time would work best for you: (all times will be displayed in your time zone, so you don't have to do any calculations).

I will then follow up with you via email and send you the detailed information about the session, an informed consent form, as well as the instructions on joining the webconference.

Feel free to reach out to me via email or Skype ([email protected]) if you have any questions or concerns.

September 7, 2018

Storyline Tutorial: Counting Answers

Sometimes we want to make sure that users attempt an interaction or select the right number of options. While such mechanics alone are not enough to ensure that "people learn", they are great if you want to customise feedback or course content, or simply prevent accidental errors and attention slips.

While Storyline makes it easy to check whether the user selected all, one or none of the options, when it comes to n answers out of many, things get a bit more complicated. In the following example, we'll solve this problem by creating a simple interaction that counts the number of answers selected and warns the users if they have selected too few or too many. In this case, the users will have 6 options to choose from and need to select exactly 3. You can see the finished example here.

I've created a Storyline 360 file to get you started (download it here). But if you wish, you can follow this tutorial in your own project. Here are the elements you'll need:

  • Objects to choose from (make sure they have the "Selected" state)
  • Submit button (either your own or the one in the player)
  • Layer with feedback for fewer than 3 answers selected
  • Layer with feedback for more than 3 answers selected

With the setup out of the way, it's time to add some interactivity. First, create a number variable called "AnswersSelected". As the name suggests, it will be used to track the number of answers selected.

Then, select the object "Option 1" (if you're using the tutorial file) or any interactive object (if you're working in your own project) and create the following trigger:

At this point you can add a text box with the variable reference to the slide and preview the slide to check if the trigger is working:

1 is the value of the AnswersSelected variable.

If it is working and the variable value is changing with the object state, go ahead and copy this trigger to all other objects. As you paste the trigger, you will notice that all trigger references to "Option 1" automatically change to "Option 2", "Option 3" and so on - depending on which object you are pasting the trigger to. As you can see, copy/pasting triggers can save heaps of time (but only if you make sure the trigger works before copying it).

As you have most likely noticed, at this point the value of the AnswersSelected variable will increase each time an object is selected. So, if you select and de-select the option "Bacon" three times, the course will think that you have selected three answers. So, we need to find a way to track when an answer is de-selected.

These are definitely not the results we are looking for...

Since we add 1 to the variable value whenever an object is selected, we will create a trigger that subtracts 1 when the same object is de-selected. To achieve that, create the following trigger:

Now, preview the slide once again and verify that the variable value changes correctly when you select and de-select the same object. If it is working (and it should), copy it over to other objects.

With all triggers added, preview the slide once again and check that the variable value changes correctly. At this point it should, but in general I recommend verifying your work often, so that you can pick up any issues early on and troubleshoot them more easily.

Although it is clear that bacon is the only choice that matters!

For the final touches, create a trigger to show feedback for less than 3 answers selected:

Then, copy/paste this trigger and change the name of the layer and the value of the variable to show feedback for more than 3 selected answers.

Finally, create a trigger to allow the user to continue if they selected exactly 3 answers:
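If it helps to see the whole mechanic at a glance, the logic we've built out of triggers can be sketched in a few lines of Python (the option names are invented for illustration; in Storyline all of this is done with triggers and layers, not code):

```python
REQUIRED = 3  # the number of answers the user must select

class Slide:
    """Mimics the slide: a set of selected options plus the
    AnswersSelected number variable that the triggers maintain."""

    def __init__(self):
        self.selected = set()
        self.answers_selected = 0  # the AnswersSelected variable

    def toggle(self, option):
        if option in self.selected:
            self.selected.remove(option)
            self.answers_selected -= 1  # "subtract 1" on de-select
        else:
            self.selected.add(option)
            self.answers_selected += 1  # "add 1" on select

    def submit(self):
        if self.answers_selected < REQUIRED:
            return "show 'too few' layer"
        if self.answers_selected > REQUIRED:
            return "show 'too many' layer"
        return "continue"

slide = Slide()
for _ in range(3):          # select, de-select, select "Bacon" again
    slide.toggle("Bacon")
print(slide.answers_selected)  # 1, not 3, thanks to the subtract step
```

Note how the subtract branch is exactly what keeps the "select Bacon three times" scenario from counting as three answers.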

And that's all there is to it - preview your work and enjoy!

July 20, 2018

Instructional Design Walkthrough: Addressing Performance Issues with Training

In this post I would like to illustrate my instructional design process, based on one of the training solutions I've designed in the past (although some details have been adjusted to preserve anonymity and confidentiality). The purpose of this walkthrough is to outline, in broad strokes, the instructional design process from the initial request to the final product and highlight the main deliverables of each phase.

As you will notice, my process is based on ADDIE. Although it is very common these days to dismiss ADDIE as "outdated", "non-agile" and "not a framework", I hope that with this post I can show that it's not as bad as it seems, and that a big part of success depends on you as an instructional designer, no matter what way of working you use.

Initial Request

Company A's Quality Department discovered that Customer Service Agents (CSAs) were not correctly handling customer contacts related to issues with discount codes, which led to monetary losses. Following the discovery, the Quality Manager (QM) requested the development of a 2-hour e-learning module to be rolled out to all CSAs globally in 5 different languages, as well as incorporated into the new hire curriculum.


Analysis

As the Instructional Designer in charge of this project, I held a discovery meeting with the QM, followed by a meeting with a subject matter expert (SME), who not only knew the subject itself, but could also provide trustworthy insight into the daily work, motivations and challenges of the CSAs.

During this meeting I focused on the answers to the following questions:
  • What is the business goal of this training and how will we measure the success?
  • What does the data tell us about CSAs' performance in each region?
  • What documentation, if any, is already available to the CSAs?
  • What are the typical or recurring mistakes?
  • Why do these mistakes happen?
  • What do we want the CSAs to do instead?
Based on the discovered information, I:
  • Reduced the target audience to the regions where performance issues were confirmed by data
  • Identified the real causes of the performance issues, such as a lack of practice in the investigation of promotional issues, or the fact that the performance issues were rooted in good intentions but wrong assumptions about the consequences of the actions taken. In other words, CSAs genuinely assumed that they were doing something good.
  • Formulated the training goal and performance-oriented learning objectives
  • Reduced the projected training time by limiting the scope to the performance issues that could actually be addressed through training
The changes to the project scope and learning objectives were signed off by the QM and the SME. The reduction of training time and target audience was received particularly positively, as it cut unnecessary costs from the project.

Design and Development

After completing the analysis, I:
  • Created a course blueprint outlining the identified performance issues with matching instructional activities and, where necessary, sources of theoretical information required to complete these activities.
  • Iteratively designed an e-learning module, beginning with a low-fidelity prototype in PowerPoint.
  • Guided the SME through reviews of the prototype and adjusted it based on their feedback.
  • Developed the final module in Storyline, featuring software simulations and a branching conversation scenario.
The module featured:
  • Short introductory scenario to introduce the learners to the consequences of their choices
  • Worked examples of the troubleshooting cases
  • Several troubleshooting simulations where the learners were required to use the simulated software environment and available information sources to correctly identify the customer's issue and take appropriate action. Upon completion of each scenario the learners received detailed feedback on their actions.
  • A branching scenario based on a conversation with a customer, where the learner needed to positively present the troubleshooting process and correctly set the customer's expectations regarding the outcome of the investigation.
The finished course was reviewed and tested by the QM, two SMEs who did not participate in the development process, as well as the CSA trainers. It was not possible to test the module on the CSAs themselves due to constraints on their availability.

Implementation and Evaluation

Following the rollout of the training, it was evaluated on the following levels:
  • Reaction of the learners
  • Changes in performance
  • Business impact
As an instructional designer, I designed the reaction evaluation survey and gathered the response data via the LMS. I collaborated with the Quality Department on gathering and analysing data for the performance change and business impact. The change in performance was measured as the decrease in wrong solutions and the increase in reports of issues via the correct channels. The business impact was measured in terms of reduced financial losses related to mishandled promotional issues.

Overall, the training led to statistically significant changes in performance as well as a considerable positive business impact. On the level of learners' reactions, 90% of respondents stated that the training had provided them with valuable practical knowledge, and 96% stated that they would be applying the new skills in their daily work. The comments from the learners highlighted the benefit and importance of the simulations and scenarios.

Due to the success of the training, and following consultation with the Customer Support trainers and leads, it was included in the standard new hire training to ensure that the performance issues were not only remedied, but also prevented.

February 11, 2018

Pilling a Cat: How Training Actually Works

Whether you are an instructional designer or a customer of external training providers, you may succumb to the idea that the success of a training depends on:
  • Length (microlearning!)
  • Format (engaging videos!)
  • Digital delivery (online learning is the best!)
With all the buzz in the media, it's hard to resist this opinion. The idea of micro-videos seems to be more enthralling than pie charts. So here's a simple way to see if the "learning nuggets" and "attention-grabbing videos" produce any return on investment:
  • Watch this short YouTube video about giving a pill to a cat.
  • Get a real cat and try giving it a pill (or a delicious snack they are not in the mood to eat).
Assuming you're new to the task, I'm quite sure that the outcome will depend not on your skills, but on the cooperation of the cat. In other words, it's not your mastery, but the ease of the problem that will define the outcome. As soon as you face a cat that deviates from the example in the video, you will most likely be at a loss (quite possibly a loss of blood, too). So much for the learning nuggets. 

The format of the "nugget" doesn't matter, whether it is a video, a drawing, or a drag-and-drop activity to arrange the steps in the right order. None of these will lead to improved performance post-training.

The reason is simple. The video contains the basic information: hold the cat's head, aim the pill at the back of its tongue, etc. There is nothing wrong with this information. It is useful and worth knowing before you approach a cat, as it can save you some time and trouble of experimenting. But the information is not enough.

Witnessing a demonstration of an ideal process does not necessarily prompt deep-level processing. In fact, it may lead to a false sense of competency. It's like looking at abstract art and claiming that anyone can do it. On the other hand, engaging with a real cat in the real world gives you a reality check and stirs up a lot of questions, for example:
  • How hard can I hold the cat's head without causing damage?
  • Can a cat bite its tongue if I try to close its mouth?
  • If the cat is making noises, which can I ignore and which are the signs that I'm hurting the cat?
  • What to do if the cat mastered tongue-wriggling and pill-spitting quicker than I mastered cat-pilling?
In the ideal situation, after grasping a basic idea of what we're supposed to do, we would venture forth and try to pill different cats with gradually increasing levels of difficulty. We would then reflect on our experience and seek ways to improve the outcome next time. This, and not the format of the presentation, is what would lead to true engagement with the subject matter and the acquisition of mastery.

Of course, one may ask: how would it be possible to achieve all of this in an e-learning module? I would say that this is the wrong question to ask, since it focuses on the format. Don't put the format before the goal. Look past isolated events and their formats. Consider performance improvement as a process spanning time and a variety of contexts. For example, are the newly trained "cat pillers" assigned to cat pilling, or do they do inventory? Do mentors observe their work, encourage reflection and provide feedback? Or do they schedule a perfunctory monthly meeting to listen to the learner's self-report of their mastery? Do the cat pillers have access to supplementary tools to aid their performance? Can they use these tools?

In short, there is nothing inherently wrong with using videos or providing information. What's wrong is stopping there. Whether we design or buy a training program, it must not stop at the dissemination of information. To achieve performance improvement, a full-scale training program would need to include:
  • Application of knowledge in novel contexts 
  • Realistic challenge 
  • Gradual increase of difficulty
  • Reflection and feedback
  • Continuation of the development post-training
  • Tools and processes that support performance post-training

January 8, 2018

Story-Based or Scenario-Driven?

Having a shared terminology is important as we use words to describe our reality, communicate ideas and achieve understanding. However, since many people step into the field of Learning and Development by following very different paths, not everyone in this sector uses a stable common language. Even the word "e-learning" can conjure up different images in the minds of different audiences. Add to this the need to communicate with non-L&D stakeholders who aren't highly interested in the semantics and the constant noise produced by marketing-oriented publications touting "story-driven action-packed gamified microlearning scenario-based videos" and you have a full picture of our messy reality.

The issue that I see particularly often is with the use of the words "story", "scenario" and "case study". Recently I had to review offers from e-learning providers who, naturally, boasted of developing "practical scenario-based modules", which upon closer inspection turned out to be the dreaded infodumps in disguise. While I do not aspire to lay the foundation for a new universal terminology, in this blog post I would like to reflect on these misused terms and take a look at what they mean and how we can tell them apart.


Story

Let's start with the easy one. We all know what a story is: a narrative with protagonists and antagonists, a beginning, a climax, etc. Stories can be told in different ways and employ different techniques to raise the audience's interest. However, a story has a definite structure that is independent of the audience's actions, thoughts and desires. It follows its predefined path from start to end.

Stories can be educational, enlightening, and inspiring, but when it comes to training in the sense of improving performance and skills, stories are not enough. For example, I can tell you a story about how I designed a training. While you might get some ideas from it, if you're not an experienced instructional designer, this story will not really teach you how to become one, and it will not have a lasting impact on your performance. In essence, a story can serve as a frame within which a training is structured, but we still need activities, practice and feedback to achieve the training goals.

Case Study

Firstly, to add more complexity to the subject, a case study as a learning method can be confused with a case study as a research method. Secondly, I often see novice instructional designers who entered the field as SMEs write stories and then christen them "case studies". For example, a novice designer may write a story about a patient who was misdiagnosed in a hospital, what happened as a result and what should have been done instead. This is not a case study in the slightest.

A case study presents the learner with a realistic challenge or question and contains supporting case materials, documents and data to be analysed; the actual content will depend on the instructional purpose. It can be, and usually is, based on a story, whether real or realistically imagined, but the story is used to provide context and realism for the task. The solutions are sought by the learners and later discussed with a mentor or in a group setting. Case studies are best used for challenges that don't have very specific solutions and where analytical thinking, argumentation and the evaluation of different perspectives are important. For instance, using the previous example of the patient: a case study would give the learners the patient's history and then ask them to come up with a diagnosis and justify it with evidence from the case materials.


Scenario

A scenario is often the most elusive concept to describe (especially since it can be nearly synonymous with a story), so in this case I will borrow the definition from Ruth C. Clark (2013, p. 5):

"Scenario-based e-learning is a pre-planned inductive learning environment designed to accelerate expertise in which the learner assumes the role of an actor responding to a work-realistic assignment or challenge, which in turn responds to reflect the learner's choices."

As we can see from the definition, what makes a scenario different from a case study or a story are these factors:
  • Learner has an active role
  • Learner solves a realistic work challenge
  • The environment responds to the learner's actions 
In contrast, a story or parable does not include the learner as an actor; they are simply observing the events that unfold. A case study, while asking the learner to work on a realistic task, does not allow them to see the results of their proposed solutions. The results can be hypothesized or imagined, but never really experienced. A scenario, however, presents the learners with choices, challenges and realistic consequences or responses.

I would note here that in my experience, scenarios are very often associated, sometimes almost exclusively, with "branching" and "dialogues". However, "branching" is a purely technical term that usually makes sense when a scenario is developed in slide-based (or screen-based) software, and dialogues are just one example of a work-related challenge. Alternative scenarios could be making a perfect cup of coffee or carrying out a medical procedure.

Now What?

Having said all that, I have to admit that for people like me, who appreciate the power of radical clarity, it is often natural to engage in petty discussions about whether a "true" scenario should be branching or not, or whether a short video is a micro-, nano- or ɰ-learning. Such discussions, particularly on social media where argumentation should be tastefully omitted for the sake of brevity and witticism, are very enjoyable but, as many pleasant things in life, rather unhealthy. The practical purpose of terminology is not to rigidly label every concept in our reality (or die trying), but to facilitate common understanding and the ability to look beyond attractive labels and see the true nature of "scenario-driven interactive experiences", as not all of these are created equal. 


Clark, R. C. (2013) Scenario-based e-Learning: Evidence-Based Guidelines for Online Workforce Learning. Pfeiffer.

August 31, 2017

My E-Learning Design Process: Taking out the Trash

It will sound strange, but it's true: the most fascinating part of my life in Germany is recycling. To be more precise, the sorting of trash. Wait, don't go yet, this will actually be about e-learning! To give you an idea of the importance of this question, here's a photo of a document I received in the mail some time ago. If you're curious, it contained the news about the new color of our "bio-trash" bins. Serious business.

So, do you receive important news about your trash very often?

Of course, when I saw the title ("Keep It or Toss It") of this week's ELH Challenge, I could not resist. I had to come up with an interaction dedicated to the intricacies of trash sorting. You can see my submission here.

In this post I'd like to talk about the making of this interaction, focusing on two points:
  • The thought process behind this interaction and some instructional design.
  • A fast and efficient way of making a drag-and-drop interaction without the "Freeform" option.
This is a reflective post and not a tutorial. I often enjoy reading reflective posts by designers and developers, as it helps me understand their thoughts and approaches to the task at hand. So, I hope you will enjoy it too, particularly if you're at the beginning of your e-learning development journey. For your convenience, I've summed up some "lessons learned" after each part.

Instructional Design

Yes, there's actually some thought, and not only humor, in this small piece. As you will notice, it doesn't include any theory or "help resources" to support the learner. This is neither due to a lack of theory (there are plenty of schemes and manuals), nor an accidental omission.

In fact, I first thought about adding an explanation of which trash goes where. But my intention was not to test people's memory. Instead, as is my usual approach when designing training, I wanted users to learn by making assumptions and testing them. In real life, you probably wouldn't read a manual about taking out the trash. Instead, you'd separate it however you feel is logical and be done with it. Thus, the only difference from real life in this case would be the opportunity to get feedback on your assumptions.

Speaking of which, my original intention was to include the most confusing trash items. In fact, I selected 8 items at first, but decided to cut it in half, considering that ELH Challenges are usually short.

In short: 

  • Create life-like contexts and tasks
  • Devise activities based on popular misconceptions


It took me slightly less than 3.5 hours to create the "course" from nothing to finished.

This may seem like a huge amount of time to spend on something as simple as 6 trash cans, a draggable object and some text. And it would be, if I'd already had a solid idea or a prototype to work from. Getting to this prototype is what's complicated and requires time. The biggest chunk of time (around 2 hours) was spent on ideation: coming up with an idea, scouting for available assets, choosing fonts and colors, and deciding on the final look. The rest was spent on creating assets, slides and interactions, as well as writing feedback, publishing, bug-zapping, and, most importantly, admiring the end result.

When I work on ELH Challenges, I do some formstorming and play with different ideas on paper before choosing one and developing it further. I have to say, unless a brilliant idea suddenly dawns upon me from the start, the more I engage in the formstorming, the better the result. So, the time spent on it should not be seen as a waste. This may be an obvious statement, but if you're locked in a "rapid e-learning development" environment, it's hard to stick to this opinion.

Formstorming is something I've learned in the Fundamentals of Graphic Design MOOC. As Lupton and Phillips (2015, p.13) define it: "Formstorming is an act of visual thinking - a tool for designers to unlock and deepen solutions to basic design problems. [...] Formstorming moves the maker through automatic, easily conceived notions, toward recognizable yet nuanced concepts, to surprising results that compel us with their originality." There are different ways to do it, but the approach I use most often is to create as many iterations of a subject as possible. If you're interested, this is a great example of 100 iterations of a letter A.

In this particular case, however, everything was defined by the trash cans. I made these directly in Storyline, so the rest of the module had to match in form and style. Still, even with this quite specific goal in mind, there were some questions to mull over:

  • How do I make sure that the user identifies the material of an object correctly? Since the assets are not photographic, it might not be obvious whether a bottle is made of glass or plastic.
  • Where to place the drag object?
  • Where to put initial instructions?
  • Where and how will the feedback appear?
  • What about a progress indicator?
  • Should I add sound effects?
  • Should I add some limits to the amount of mistakes?
  • Fonts?
  • Colors?
And probably some more. I went through approximately 10 different slide design variations before settling on the final version. For example, I tried adding a wall behind the trash cans and writing the object description in a graffiti-like font (it didn't work well with a "cutesy" flat design), or adding a progress tracker.

In short:

  • Formstorming is not a waste of time, because...
  • The more design questions you answer before starting with the actual development, the faster you'll develop.
  • You can formstorm in Storyline, but I recommend starting on paper first.
  • Next time someone asks you, "But how hard can this be?!", you can show them this post.


As I often say, once you know what you're doing, development is easy. In this case I followed my own advice and finalised one activity slide before copying it several times and making small adjustments.

I didn't use any special tricks to create the activity. It is made from scratch, but you can achieve the same effect with the "free form" option. I prefer my own triggers, unless I'm really pressed for time (mostly because I feel more in control). The triggers are very simple:

"What's with the layer 'Object'?", you might ask. Excellent question. The layer "Object" actually shows the description of the trash item:

Object "name" and description on a separate layer

The purpose here is to automatically get rid of this text when the feedback layers appear (instead of hiding it on each layer's timeline). I sometimes do this when I have layers and need to either hide a lot of objects at once, or show something only once. While this is not hugely beneficial for an interaction with just two layers, if you have to do this for, let's say, 10 layers, you begin to see the benefit.

The only other point I would highlight here is that it might be tempting to treat each trash can as a separate drop target and set up triggers for each individually. In this case, however, as you most likely noticed from the triggers, I used two hotspots instead: a small one for the correct can and a big one, spanning the slide, for all the others:

Green (correct) hotspot is placed over the red (wrong) one.

This way it was easier to create additional slides by duplication, as I didn't need to re-do the triggers at all. Instead, I simply moved the "Correct" hotspot to the right bin. An optional touch was to hide incorrect bins on the feedback layers, but this was also easily adjusted.
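The two-hotspot trick boils down to a simple z-order hit test, which can be sketched in a few lines of Python (the coordinates and names below are purely illustrative, not taken from the actual file):

```python
def contains(rect, point):
    """rect = (x, y, width, height); point = (x, y)."""
    x, y, w, h = rect
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def drop_feedback(point, correct_hotspot, slide_hotspot):
    # The small "correct" hotspot sits on top of the slide-wide
    # "wrong" hotspot, so it must be checked first (z-order).
    if contains(correct_hotspot, point):
        return "correct"
    if contains(slide_hotspot, point):
        return "wrong"
    return "no drop target"

correct = (400, 300, 100, 150)  # hotspot over the right bin
anywhere = (0, 0, 960, 540)     # hotspot spanning the whole slide

print(drop_feedback((450, 350), correct, anywhere))  # correct
print(drop_feedback((50, 50), correct, anywhere))    # wrong
```

Duplicating a slide then only means moving `correct` to a new position, which is exactly why the two-hotspot setup saves so much re-triggering.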

In short:

  • Consider moving objects from the base slide to a layer, if you want to consistently hide them when other layers appear.
  • Avoid extraneous work whenever possible (do you really need to have 6 drop targets where 2 are enough?).
  • My advice from this post is actually good. :) 


Lupton, E. and Phillips, J.C. (2015) Graphic Design: The New Basics. New York: Princeton Architectural Press.

Liked this post? Hated it? Want to hire me or get in touch? Let me know in the comments below or ping me on LinkedIn. I also do freelance projects.