Thursday, October 24, 2013

Eli Pariser Keynote, #DevLearn "The Filter Bubble"

These are my liveblogged notes from Eli Pariser's morning keynote session on Thursday, October 24 at the eLearning Guild DevLearn Conference, happening this week in Las Vegas. Forgive any incoherence or typos. I am in Vegas, after all...

Eli Pariser, The Filter Bubble 

Facebook filters what we see based on our perceived preferences. The sorting helps curate lots of data, but it puts us in a bubble where we only see things similar to us.

With only five data points, companies say they can predict some key things about you with 80% accuracy.

This shapes the products that are recommended to us...and what we see.

We don't all get the same Google anymore.

Google looks at 57 signals that say something about you: the computer you're using (which hints at your socioeconomic status), the browser you're using, where you're located...

Every major company is moving towards personalization and providing people with more relevant data.

This means we don't all see the same news...
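To make the mechanics concrete, here's a minimal sketch in Python of how signal-based personalization narrows what you see. The user signals, stories, and scoring function are all hypothetical and made up for illustration -- this is not Google's or Facebook's actual algorithm, just the general idea of ranking purely by similarity to past behavior.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    topics: set[str]

# Hypothetical signals inferred about a user (device, location, past clicks...).
user_signals = {
    "location": "las_vegas",
    "device": "laptop",
    "clicked_topics": {"tech", "design"},
}

def relevance(story: Story, signals: dict) -> float:
    # Score a story by its overlap with topics the user already engages with.
    overlap = len(story.topics & signals["clicked_topics"])
    return overlap / max(len(story.topics), 1)

stories = [
    Story("New tablet announced", {"tech"}),
    Story("Local election results", {"politics", "civic"}),
    Story("Minimalist web design trends", {"design", "tech"}),
]

# The "filter": sort by relevance and show only the top results.
for story in sorted(stories, key=lambda s: relevance(s, user_signals), reverse=True)[:2]:
    print(story.title)

Because every story is scored against what the user already clicks on, anything outside those topics never makes the cut -- that's the bubble.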


There are three key problems with this:

1. The Distortion Problem
We get an inaccurate view of the world.

2. The Psychological Equivalent of Obesity

There's a pull within us -- we want great content but sometimes we just want to watch Ace Ventura a fourth time. It's a tug of war between our aspirational selves and our actual selves.

Great media sources help us balance our information diet.

We rely on these search algorithms more than we should -- there are still some things that human editors do a lot better:

1. Anticipation.
2. Risk-taking. (In restaurant recommendations, Chipotle always comes up. It's safe and we all mostly like it, but it's not a risk. These engines can't take that risk.)
3. Provide the whole picture. Algorithms don't have that sense of how things hold together.
4. Pairing.
5. Social importance
6. Mind-blowingness. Think about the piece of media that most changed your life. It probably wasn't the easiest to take in, but it stuck with you afterward. Personalization doesn't give bonus points to things that stick with us; human editors are good at this.
7. Trust. I may not like football, but because I trust this magazine, I might read that article. This pulls us out of our comfort zone. Or friends make recommendations. Algorithms can't say, "Walk with me here, because I think this might be something you find interesting."

3. A Matter of Control
It's not really a choice if you don't know that you have it. A part of being human is knowing what choices you have so you can make your own decisions.  Right now, we're giving that choice over to code.

Eric Schmidt of Google: "They want Google to tell them what to do next." -- if that's true, then we need to make sure that these algorithms work in the right way.

The new gatekeepers are code. The code decides what information is most important, but it does so without any sense of civic duty, which the best human editors have.

"Learning is by definition an encounter with what you don't know..." Siva Vaidhyanathan

Google shields us from the radical encounters that can help us learn.

What can we do?

1. We need to make sure that the filterers are better. We need to make sure they're looking at a whole range of signals. That they're not just giving us what we like and what is similar to us...

2. We need to tailor data not just based on what is relevant, but also on other points of view and on things that are challenging and uncomfortable. (See the sketch after this list.)

We need to understand where the editor is coming from and what its point of view is.

Kranzberg's Law: "Technology is neither good nor bad, nor is it neutral."

3. Give students the tools to build better filters.
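Picking up on point 2, here's a hedged sketch (again in Python, with made-up weights and scoring) of one way a "better filter" could work: blend relevance with a novelty bonus so that unfamiliar, challenging material isn't filtered out entirely.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    topics: set[str]

clicked_topics = {"tech", "design"}  # hypothetical past-behavior signal

def relevance(story: Story) -> float:
    # Overlap with topics the user already likes.
    return len(story.topics & clicked_topics) / max(len(story.topics), 1)

def novelty(story: Story) -> float:
    # Reward topics the user hasn't engaged with -- the "vegetables".
    return len(story.topics - clicked_topics) / max(len(story.topics), 1)

def filter_score(story: Story, novelty_weight: float = 0.4) -> float:
    # Trade off familiarity against exposure to something new.
    return (1 - novelty_weight) * relevance(story) + novelty_weight * novelty(story)

stories = [
    Story("New tablet announced", {"tech"}),
    Story("Local election results", {"politics", "civic"}),
    Story("Minimalist web design trends", {"design", "tech"}),
]

for story in sorted(stories, key=filter_score, reverse=True):
    print(f"{story.title}: {filter_score(story):.2f}")

The 0.4 novelty weight is an arbitrary assumption; the design point is simply that a filter can be tuned to surface things outside our comfort zone rather than only echoing what we already like.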

We create the Web. It is by no means finished.

The Web can be the technology that connects us to lots of new ways of thinking, that takes us out of our comfort zone.

You can turn off personalization on your Google search.  

If you google yourself, you'll get very different results than if someone else googles you.


