Value Added versus Liability Sponge

Someone who always gets me thinking is danah boyd.  Her post “Toward Accountability: Data, Fairness, Algorithms, Consequences” is the latest to prod my brain!

Her post raises the issue of how data collection and data manipulation are not neutral activities…that the decision to collect or not collect, and the thought process behind the analysis of data, have value implications.  One example she used involved open data: the transparency of data about segregation in NY schools led many to self-segregate, leading to more segregation, not less.  In another example, she noted how Google’s search algorithms picked up racist biases by learning from the inherently biased search practices of people in this country.

danah noted toward the end of her post:

“But placing blame is not actually the same as accountability. Researcher Madeleine Elish was investigating the history of autopilot in aviation when she uncovered intense debates about the role of the human pilot in autonomous systems. Not unlike what we hear today, there was tremendous pressure to keep pilots in the cockpit “in case of emergency.” The idea was that, even as planes shifted from being primarily operated by pilots to primarily operated by computers, it was essential that pilots could step in last minute if something went wrong with the computer systems.

Although this was seen as a nod to human skill, what Madeleine saw unfold over time looked quite different. Pilots shifted from being skilled operators to being liability sponges. They were blamed when things went wrong and they failed to step in appropriately. Because they rarely flew, pilots’ skills atrophied on the job, undermining their capabilities at a time when they were increasingly being held accountable. Because of this, Madeleine and a group of colleagues realized that the contexts in which humans are kept in the loop of autonomous systems can be described as “moral crumple zones,” sites of liability in which the human is squashed when the sociotechnical systems go wrong.”

These two paragraphs seem to provide some context to the chapter I am currently reading in Tom Friedman’s new book, Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations.  In “Turning AI into IA”, Friedman suggested that part of the solution to dealing with the triple accelerations of technological change, global market change, and environmental change lies in leveraging artificial intelligence into intelligent assistance, intelligent assistants, and intelligent algorithms.  Friedman noted that in this age of accelerations, we need to rethink three key social contracts – those between workers and employers, students and educational institutions, and citizens and governments.

I totally agree that the status quo is not the answer, whether we are talking corporate structures, higher education, or government.  I worry, though, about the extent to which some would push technology as the answer for higher education.

I firmly believe that integrating digital technology into teaching and learning makes sense…if one starts with the learning outcomes first and chooses the technology for the right reasons.  TPACK still resonates with me!  Smart technology could easily take the place of repetitive practice work, freeing faculty to focus on the underlying critical thinking skills that students must develop in order to succeed in tomorrow’s world.  My worry would be faculty who see the opportunity to place their courses on autopilot while they pursue their research interests.  Like the pilots above, faculty could see their teaching skills atrophy…setting them up as liability sponges if students fail.

Friedman made an interesting observation – that when ATMs became common in banks, there was an assumption that they would replace bank tellers.  Instead, by reducing the cost of operation, ATMs made it possible to open many more branches…and the number of tellers increased.  They no longer handled as much cash, but they became instead points of contact with customers.

If one visualizes the higher ed equivalent of an ATM, one might see a future for higher education that involves lower cost, more locations, and more faculty…but faculty “teaching” in new ways.  Now is the time to have those conversations about the future of higher ed, the future of faculty, and the future of learning.  We need to be proactive before we find ourselves in a moral crumple zone of our own making.

{Graphic: Mishra & Koehler}

Impact of Prior Knowledge on Teaching

In our GRAD 602 class last week, we spent some time surfacing our students’ beliefs about learning.  As Laura noted in her blog, we “dabbled in the classic ‘See one, do one, teach one.'” From there, we discussed some of the work of John Bransford on How People Learn, as well as the opening of John Medina‘s presentation on Brain Rules.  Lastly, we showed a section of the documentary A Private Universe, which asked recent Harvard graduates to explain the reason for the seasons.  Twenty-one out of twenty-three incorrectly stated (with conviction) that the seasons were caused by the earth being closer to or farther from the sun, rather than by the tilt of the axis and direct versus indirect sunlight.  The documentary pointed out the strong impact of prior knowledge driven by exaggerated pictures in elementary school textbooks showing the earth revolving around the sun in an ellipse rather than a nearly circular orbit.

{Graphic: Orbit Drawing}

It was obvious as time ran out that we had made our students uncomfortable.  We hope to move this Thursday into some of the reasons for that discomfort, but it gave us a starting point for this week’s podcast:

{Graphic from ProProfs Flashcards}
