Blackboard’s Complexity Problems

Reactions to this year’s BbWorld are starting to roll in, and I’d like to add something to Michael’s recent post on Blackboard’s Messaging Problems. Yes, Blackboard could be better at communicating (can’t we all?), but I believe that Blackboard’s more challenging problems are rooted in the complexity of its software rather than the perplexity of its marketing messages. (Good work on Ultra though – that’s looking fantastic. If that’s what Ultra is. Which now I’m not sure what it is. But good work nonetheless. Though I haven’t come across anyone actually using it yet…)

Both confusing messaging and technical complexity present real problems, but while messaging can be (relatively) easily corrected, the technical complexity is a lot more difficult to untangle. Let’s examine.

Architectural Components

The greatest source of complexity in Blackboard’s flagship Learn LMS is the sheer number of supported configurations, even in its latest release. It can be run on different versions of Windows, Linux, or Solaris. It can be run with multiple versions of Oracle or SQL Server as its database (PostgreSQL support was also rumored at one point, but I still don’t see it). Each of these can be combined with different OS and database updates, hotfixes, and patch sets. The software runs on a version of Java that reached its end-of-life several months ago and will no longer receive any updates, security or otherwise (otherwise the Java version would be yet another variable). It can run on virtualized environments or bare metal. It can be hosted by Blackboard or self-hosted. And these combinations don’t even account for the additional ones that Michael’s post references – still more dependencies on software hosted or delivered by Amazon and through others.

The application that runs on top of all of these components is a well-intentioned but clunky amalgamation of many different pieces. It has flavors of the historical Blackboard and of the WebCT architects. There’s a little bit of ANGEL mixed in. Some of the design came from seasoned software architects with decades of experience, while other bits came from media studies majors (including one really smart one in particular). Some pieces were built when the company was still a startup; others are artifacts of various rearchitecting efforts. Still others were jammed in or bolted on through various acquisitions.

Today, it’s not just the pieces that Blackboard itself owns that matter. In-line document rendering is provided by a third party called Crocodoc. Parts of the video recording and embedding capabilities are provided by Google and YouTube, a fact that became painfully clear when embedded course videos in Blackboard environments across the world recently stopped working, apparently all at once. Compare this to Instructure, which can update every single one of its customers instantly and simultaneously when something breaks, or D2L, which only needs to support the Windows technology stack and is making steady progress toward periodic automatic updates for both hosted and self-hosted institutions. Blackboard, by contrast, still faces every combination of variables imaginable (and arguably even more after this year’s BbWorld).

To be fair, few other companies support this raw complexity of configuration combinations, so kudos to Blackboard for holding it together for so long. In comparison, Microsoft supports an incredible array of Office product versions across Windows and Mac OSs and seems to do well compatibility-wise. Apple, in contrast, constrains the number of combinations of OSs and devices it supports. I suppose WordPress is a worthy comparison in that it is hosted on any number of different combinations of OSs, databases, and PHP versions. But WordPress seems to have it figured out, too: the last WordPress update I ran took one click in a web UI and less than 10 seconds. Which leads me to my next point…

Modularity and Updates

Blackboard has for many, many years had a wonderful plugin framework called Building Blocks. It’s seriously cool (and I’m not just saying this because I spent a decade of my life building them). Building Blocks were “apps” ten years before the concept became mainstream. So when Blackboard announced that it was (finally) modularizing the Learn product into its own Building Block plugin framework to make updates easier, I was really excited. The approach was rushed into use as a way to increase the pace and decrease the time to availability of bug fixes. That was a good goal, but the rush meant some important details weren’t fully thought through. As a result, when you had a bug, the Blackboard Support team had to ask not only which version of Blackboard you were running, but also which specific version of each plugin.

Updating these plugins sometimes required restarting Blackboard services (i.e., downtime); other updates did not. Some updates had to be performed in a certain sequence, or else they would not work. One time, I remember, the official documentation specified an order, but it was the wrong one. Another time, the specified order was technically impossible to achieve. There was also no real way to roll back once the install button was clicked, so the risks of keeping the software up-to-date arguably became even greater. Oh, and there were still releases/service packs and hotfixes to keep track of despite this welcome improvement. Sometimes those “upgrades” actually blew away the Building Block updates made between the release of the official updater/installer and the last batch of Building Block updates. And of course, there’s that pesky detail that every institution still decides at its own pace and on its own timeline when to update.

APIs and Integration

Speaking of Building Blocks, there’s one last complexity to Blackboard’s software that can be observed indirectly through its APIs. You can comb through the latest API documents to make your own judgment calls, but from my technical vantage point, I see:

  • 6 representations of a “course” (AdminCourse, Course, CourseCourse, CourseSite, CourseVO, Organization)
  • 4 representations of “enrollment” (Enrollment, CourseMembership, CourseMembershipVO, StaffAssignment)
  • 3 or 4 representations of “user” (User, UserVO, UserInfo, Person)
  • 2 (public) representations of “grades” (anecdotally there are several others, which is why grades in their mobile app historically haven’t always matched the instructor gradebook which sometimes didn’t match the student view of the gradebook – and why it’s so hard to fix them all)
  • 10ish representations of “course content” (Content, ContentFile, ContentFolder, ContentVO, CSEntry, CSFile, BbFile, ChildFile, CourseDocument, LOItem (learning object))

Needless to say, it’s confusing not only for external third-party developers but likely also for Blackboard’s own in-house developers to know which of these are the right ones, or best ones, or most appropriate ones to use.

As was the case in 2013 when this blog started tracking LMS data, Blackboard still has the largest variance of installed versions of all the major flavors of LMSs (though to their credit, it is getting better). But the challenges are still far from over. I suspect foregoing the Spring 2015 release was one part of Blackboard’s strategy towards bringing all of their customers closer together and narrowing the version spread.

But my bottom line remains: technical problems are much harder to solve than marketing ones, and the complexities described here make every Blackboard installation unique in some nuanced way or another. Staying this course is not sustainable. If I were Blackboard, I’d make decreasing software complexity an important goal rather than replaying the “rebundling the licenses” shell game. A good product may sell itself, but a complex one hastens its own demise.


xAPI: Developer Bootcamp Debrief

Until last week, I hadn’t really had the opportunity to look much into what is now called the Experience API (aka xAPI) since the days when it was called “Tin Can,” but it has blossomed into something awesome. xAPI’s origins are rooted in SCORM, a format used pervasively throughout the US federal government to build training materials. The new specification builds upon and expands SCORM’s ability to deliver and track learning activities while freeing itself from its predecessor’s sometimes-challenging legacy. It vastly improves on SCORM’s learning analytics capabilities, intelligently coupling the delivery of educational materials with support for design thinking about which interactions and activities to capture for later use. Funded by the DoD, the Experience API was produced as the result of a Broad Agency Announcement – an RFP-like mechanism through which bids are evaluated through a peer or scientific review process. This is one line item in the national defense budget that I, as a US citizen, fully support.

xAPI is part of a larger effort called the Training and Learning Architecture and provides both the mechanisms and data stores for capturing learning experiences – hence its name. It has been gaining traction with a growing number of adopters, has attracted the interest of large organizations like Amazon, and has appeared in guidance from both the Department of Education (page 55) and EDUCAUSE (page 9) with regards to educational technology choices and best practices. It has received mention in new development activities being undertaken by the open source Apereo Foundation (the new home of the Sakai LMS) and the UK’s JISC organization, which is sponsoring the addition of xAPI support to Moodle.

The specification works by defining a format for logging user interactions with learning materials in an “Actor, Verb, Object” format – for example: “student watched video,” “student entered simulation,” “instructor provided feedback,” or “student started discussion.” Communities of practice and design cohorts involving hundreds of participants have formed around this concept, defining a vocabulary around these expression statements in a grassroots manner to provide rich contextualization around the types of learning experiences that could be recorded within specific contexts. To date these include areas such as virtual simulations, augmented reality environments, and even subjects like foreign language and healthcare education. The expression statements that log learning interactions are human readable, and their contexts are not lost or buried under layers of technical jargon. Rather, they surface the sometimes-hidden but incredibly important application of instructional design concepts during the creation and development of learning content and experiences.
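To make the “Actor, Verb, Object” format concrete, here’s a minimal sketch in Python of a “student watched video” statement. The actor details and activity IRI are made-up placeholders, and I’ve used the ADL-hosted “experienced” verb IRI as one plausible choice of verb:

```python
import json

def make_statement(actor_name, actor_email, verb_id, verb_name,
                   activity_id, activity_name):
    """Build a minimal xAPI statement dict (Actor, Verb, Object)."""
    return {
        "actor": {
            "objectType": "Agent",
            "name": actor_name,
            "mbox": "mailto:" + actor_email,
        },
        "verb": {
            "id": verb_id,                    # verbs are identified by IRI
            "display": {"en-US": verb_name},  # human-readable label
        },
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# "student watched video" -- placeholder actor and activity
stmt = make_statement(
    "Example Student", "student@example.edu",
    "http://adlnet.gov/expapi/verbs/experienced", "experienced",
    "http://example.edu/courses/bio101/intro-video", "Intro Video",
)
print(json.dumps(stmt, indent=2))
```

The statement stays human readable precisely because the three top-level keys mirror the subject-verb-object sentence it encodes.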

These expression statements are stored in what is called a Learning Record Store (LRS) using RESTful HTTP calls that support multiple authentication/security protocols. LRSs are powerful in that they not only record current learning activities for later reporting and auditing purposes, but they can also “jump start” new learning experiences by informing new learning tools, apps, and content about what you already know in order to personalize delivery. LRSs provide different personal learning profiles that allow work training to be kept separate from schoolwork and hobby interests. LRSs also provide a framework for controlling which specific data each learning application is permitted to use.

The developer bootcamp was made available at no cost to participants, and both the source code and all presentations are openly accessible on GitHub. The sign-up process for accessing a developer sandbox is super simple and takes less than 30 seconds. (The source code to the LRS sandbox is also freely available.) The materials walk a newcomer through setting up and creating her/his first web-based xAPI-enabled content, observing example learning experiences recorded by an online JavaScript-based Tetris-like game, embedding xAPI statements into an open source HTML5-native virtual simulation environment, and reporting and visualizing all of the learning activities captured in the developer sandbox LRS. A separate track catered to learning experience creators and instructional designers, using interactive exercises to explore different perspectives on how to design learning analytics capture into learning materials and experiences.

I also learned of many great, working, real-world examples of how xAPI is being used. For example, this open source project can record play time and user interactions with a YouTube video. This post details how an xAPI-capable beacon was embedded within a training mannequin used in an EMT training exercise. This page provides a number of case studies detailing how many different organizations and vendors are actively using xAPI for various educational purposes.

xAPI is not limited to web-based content and can be used across devices and delivery methods including wearable technologies, mobile devices, and e-readers. One participant in the DC-area developer bootcamp ran his personal Learning Record Store on a Raspberry Pi hanging on a lanyard around his neck. (I regret not getting a picture, though I wanted to respect his privacy.) The spec does not depend on the existence of an LMS in any way.

xAPI appears to be gaining rapid traction as a solution for corporate and government training, but many universities were present at this bootcamp, too. The specification appears to be ideally suited for use cases related to professional, experiential, competency-based, and lifelong learning particularly when new learning experiences could be made more personalized or effective by having access to data about prior learning experiences. Best of all – while new, it is actually succeeding in the real world!

The entire xAPI spec is openly available on GitHub. Kudos and great work to this team for making great progress with this modern approach to learning analytics.

This post written by George Kroner