Defining Online: Ask the Machines?

Dave Weinberger had a very interesting post on Backchannel last week that suggests AI now has knowledge we will never understand.  Dave noted:

“We are increasingly relying on machines that derive conclusions from models that they themselves created, models that are often beyond human comprehension, models that “think” about the world differently than we do.”

He goes on to say that we have long been trying to simplify the world – think of Einstein attempting to find a unified field theory to tie together gravity and electromagnetism.  This new machine way of thinking may suggest “simple” is wrong.  Google’s AlphaGo program can now beat anyone at Go…but cannot teach you how to play Go.  It thinks differently than its human competitors.

Dave noted that for thousands of years, we acted as if the simplicity of our models reflected the simplicity of our universe.  We are beginning to learn that with an almost infinite number of interrelated variables, the real world is too complex to “know.”

In Tom Friedman’s book, Thank You For Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations, he suggested that Mother Nature provides a good model for facing the future, as Mother Nature has been adaptable and resilient for billions of years.  Mother Nature has shown the ability to adapt when new knowledge becomes available, the ability to embrace diversity, the ability to balance micro- and macro-level processes, and the ability to change in a sometimes brutal fashion.  In other words, Mother Nature crunches the numbers and does not attempt to simplify the models.

I am not sure why…but this all came to mind as I read Tony Bates’ post “What is online learning: Seeking definition.”  Tony described a new survey going out to all Canadian institutions of higher education, seeking to understand the future direction of e-learning in Canada.  He noted that simply defining “online learning” has proved problematic, as different institutions have somewhat arbitrary definitions of online, blended, and hybrid – and even of the differentiation between credit, contract, and free courses.  Over the past two decades, I have run into similar issues trying to define what is meant by online learning.  My definitions have run the gamut from structured courses delivered asynchronously (and sometimes synchronously) through LMSs…to “It’s All Frigging Online!” – meaning that we are all now so interconnected that “online” is simply a continuum along which learning can be facilitated…but that continuum rarely approaches zero.

Tony noted that the results will be available in early September, and I suspect his team will learn much from them.  I also wonder whether sufficient data already exists in the cloud that machine intelligence could examine the same issues and arrive at new insights that might be difficult for us humans to grasp.

It is amazing that we live in an era where contemplating that is even possible!

{Graphic: The Vital Edge}

Value Added versus Liability Sponge

Someone who always gets me thinking is danah boyd.  Her post “Toward Accountability: Data, Fairness, Algorithms, Consequences” is the latest to prod my brain!

Her post raises the issue of how data collection and data manipulation are not neutral activities…that the decision to collect or not collect, and the thought process behind the analysis of data, have value implications.  One example she used involved open data: the transparency of data about segregation in NY schools led many to self-segregate, producing more segregation, not less.  In another example, she noted how Google’s search algorithms picked up racist biases by learning from the inherently biased search practices of people in this country.

danah noted toward the end of her post:

“But placing blame is not actually the same as accountability. Researcher Madeleine Elish was investigating the history of autopilot in aviation when she uncovered intense debates about the role of the human pilot in autonomous systems. Not unlike what we hear today, there was tremendous pressure to keep pilots in the cockpit “in case of emergency.” The idea was that, even as planes shifted from being primarily operated by pilots to primarily operated by computers, it was essential that pilots could step in last minute if something went wrong with the computer systems.

Although this was seen as a nod to human skill, what Madeleine saw unfold over time looked quite different. Pilots shifted from being skilled operators to being liability sponges. They were blamed when things went wrong and they failed to step in appropriately. Because they rarely flew, pilots’ skills atrophied on the job, undermining their capabilities at a time when they were increasingly being held accountable. Because of this, Madeleine and a group of colleagues realized that the contexts in which humans are kept in the loop of autonomous systems can be described as “moral crumple zones,” sites of liability in which the human is squashed when the sociotechnical systems go wrong.”

These two paragraphs seem to provide some context for the chapter I am currently reading in Tom Friedman’s new book, Thank You For Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations.  In “Turning AI into IA,” Friedman suggested that part of the solution to dealing with the triple accelerations of technological change, global market change, and environmental change lies in leveraging artificial intelligence into intelligent assistance, intelligent assistants, and intelligent algorithms.  Friedman noted that in this age of accelerations, we need to rethink three key social contracts – those between workers and employers, students and educational institutions, and citizens and governments.

I totally agree that the status quo is not the answer, whether we are talking corporate structures, higher education, or government.  I worry, though, about the extent to which some would push technology as the answer for higher education.

I firmly believe that integrating digital technology into teaching and learning makes sense…if one starts with the learning outcomes first and chooses the technology for the right reasons.  TPACK still resonates with me!  Smart technology could easily take the place of repetitive practice work, freeing faculty to focus on the underlying critical thinking skills that students must develop in order to succeed in tomorrow’s world.  My worry would be faculty who see the opportunity to put their courses on autopilot while they pursue their research interests.  Like the pilots above, their teaching skills could atrophy…setting faculty up as liability sponges if students fail.

Friedman made an interesting observation – that when ATMs became common in banks, there was an assumption that they would replace bank tellers.  Instead, by reducing the cost of operation, ATMs made it possible to open many more branches…and the number of tellers increased.  They no longer handled as much cash; instead, they became points of contact with customers.

If one visualizes the higher ed equivalent of an ATM, one might see a future for higher education that involves lower cost, more locations, and more faculty…but faculty “teaching” in new ways.  Now is the time to have those conversations about the future of higher ed, the future of faculty, and the future of learning.  We need to be proactive before we find ourselves in a moral crumple zone of our own making.

{Graphic: Mishra & Koehler}

Exploring the Intersection of Leadership and Technology

I am always stoked when I get a chance to teach ILD 831 for Creighton University.  This course in their Interdisciplinary Doctorate in Leadership Program brings together an eclectic group of leaders from around the world to explore the impact of technology in general – and the internet in particular – on leadership in organizations.  Through this examination, these students struggle with how leadership does (or should) adapt to a changing world.  In the past decade, the internet has certainly become a part of life and work.  It has moved from a virtual space where people went to find information to an active place that is open, social, and participatory.  This shift has profound implications for leadership.  How does a leader manage information (and knowledge) when the sum of all human knowledge is available to anyone in her or his organization from their smartphone?  How is communication evolving?  What are the ethical issues associated with networked employees, students, or patients?  What is on the horizon?  This course gives students the opportunity to explore leadership mediated by a digital world.

My course map shows the flow of this 8-week course, which is starting this week:
{Graphic: ILD 831 Course Map}


This Spring class has teachers in K-12 and higher education, technologists, industry managers, a fire chief, and the CEO of a health system.  I always love the mix of experiences these students bring to this examination.  As we move through these eight weeks, they will all be blogging.  You can see their posts – and interact with the class – at our Netvibes site.

These are interesting times to examine this intersection.  I am currently reading Martin Ford’s 2015 book, Rise of the Robots: Technology and the Threat of a Jobless Future.  It paints a rather bleak picture around the idea that technology – and in particular artificial intelligence – is creating a future where many jobs are eliminated but few are created in their place.  In other words, according to Ford, we face a future where unemployment and inequality could reach catastrophic levels.  Scott Santens mirrored similar thoughts in an article in the Boston Globe last week.

Last week on Medium, danah boyd discussed “What is the Value of a Bot?”  She noted that as systems get more complex, it becomes harder for developers to come together and develop “politeness policies” or guidelines for bots.  It is also getting increasingly difficult to discern between bots that are helpful and bots that are a burden.  One of the key points she made:

“Bots are first and foremost technical systems, but they are derived from social values and exert power into social systems. How can we create the right social norms to regulate them? What do the norms look like in a highly networked ecosystem where many pieces of the pie are often glued together by digital duct tape?”

This is the world these leaders are and will be leading in…and there are no easy answers.  I am looking forward to our dialogue on the open web over the next two months!