Building integrated networks to connect data across campus requires vast, and often unrecognized, investment from the CIO’s office. Recent IT Forum research suggests that CIOs disproportionately invest their resources in integration. Interoperability is crucial to campus life, but it is more labor intensive than end users realize.
When two electronic systems on campus need to work together, building Extract-Transform-Load (ETL) code to move data between them is cumbersome and time-consuming. Take, for example, a software solution that requires the input of students’ midterm scores. If a professor in one class gives two midterms, what is the best way to input that data into a single field? Should the software take the highest score? The lowest? Should it average the two scores stored in the LMS before ingesting them? Once professors agree on a course of action, itself a difficult process, the IT team must then write the code to effect that integration.
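To make the problem concrete, here is a minimal sketch of what such a transform might look like, assuming the professors agreed to average the scores. The record layout, field names, and student IDs are all illustrative, not drawn from any particular LMS:

```python
def transform_midterm_scores(lms_records):
    """Collapse multiple midterm scores per student into a single field.

    Assumes an averaging policy was agreed upon; taking the highest or
    lowest score would simply swap out the reduction below.
    """
    output = []
    for record in lms_records:
        scores = record["midterm_scores"]           # e.g. [82, 90]
        combined = sum(scores) / len(scores)        # the agreed-upon rule
        output.append({
            "student_id": record["student_id"],
            "midterm_score": round(combined, 1),    # single field expected downstream
        })
    return output

# Two midterms are averaged into one field; a single midterm passes through.
rows = transform_midterm_scores([
    {"student_id": "S001", "midterm_scores": [82, 90]},
    {"student_id": "S002", "midterm_scores": [75]},
])
```

Even this trivial transform encodes a policy decision, which is why a vendor update that changes the export format can silently break the agreed-upon behavior.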
Such efforts protract new software implementation timelines, while user requests for customization add labor and multiply points of complication. When vendors push updates to their systems, ETL pathways often break, and the Sisyphean cycle of integration begins anew.
Amid these disparate systems and repeated cycles of integration, data can be modified and manipulated beyond recognition and utility. The resulting inconsistencies leave CIOs without confidence in their data quality. Indeed, of the 29 capabilities the IT Forum’s Functional Diagnostic tracks, data governance and decision support are the weakest across all institutions.
[Figure: IT Functional Diagnostic Results for CIO Business Intelligence/Analytics Capabilities. Data represent 112 ITF member institutions; CIOs score IT capabilities 0-4 on current maturity and institutional importance.]
To support analytics—and save IT time—CIOs should focus on the data first.
Data liquidity: The gold standard of interoperability
When electronic systems share data, the quality of communication varies. Getting systems not only to talk to each other, but also to share high-quality data, requires CIOs to move beyond interoperability to embrace liquid data.
Interoperability is the ability of two electronic systems to exchange and make use of shared data. Interoperability is often built around integrations and ETL coding that transforms data from one format to another before loading it into new systems for use. Data is often flattened to facilitate transfer, which can limit the depth and utility of second-system outputs.
Liquid data can be entered in one location within a system, and then accessed across other systems or by other users. To be “liquid,” data must be structured, dynamic, and able to move through multiple systems.
In short, all data that is liquid provides the means for interoperability, but not all interoperability is built around liquid data.
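The contrast can be sketched in a few lines of code. In the liquid model below, downstream systems read the same structured record from a single system of record, so an update made once is visible everywhere on the next read; every system, field, and record name here is hypothetical:

```python
# Hypothetical system of record: one structured, dynamic student record,
# entered in one location and referenced everywhere else.
system_of_record = {
    "S001": {"name": "Ada Lopez", "email": "ada@example.edu", "credits": 45}
}

def read_for_system(student_id, fields):
    """A downstream system pulls the live record, scoped to the fields it
    is authorized to see, rather than ingesting a flattened ETL copy."""
    record = system_of_record[student_id]
    return {f: record[f] for f in fields}

# Two consuming systems read different slices of the same record.
advising_view = read_for_system("S001", ["name", "credits"])
lms_view = read_for_system("S001", ["name", "email"])

# One update in the system of record...
system_of_record["S001"]["credits"] = 48
# ...is reflected in every consuming system's next read, with no ETL rerun.
```

Under ETL-based interoperability, by contrast, each consuming system would hold its own flattened copy of the record, and the credit update would have to be re-extracted, re-transformed, and re-loaded into each one.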
As the gold standard of interoperability, data liquidity allows networks to:
- Transfer dynamic data quickly between systems
- Capture and transmit data in a searchable form
- Eliminate the need for time- and money-intensive ETL coding between systems
- Use a central system of record with appropriate authorizations in different systems to mitigate instances of error
For higher education, liquid data has the power to simplify data transfer, streamline interfacing systems, and significantly improve the quality of institutional analytics.
Beyond the horizon: The future of shared data in higher education
Intra-campus connectivity is a vital first step, but a larger issue looming in higher education is the need for inter-campus data sharing. Although a number of players now offer constituent relationship management (CRM) solutions to stand as systems of record in higher education, there is no “liquidity” between vendors. While structured data will allow institutions to easily move and analyze data, comparison and collaboration across institutions will present its own challenges.
What we can learn from health care
Since 1987, health care has used HL7, a common data structure centered on patient information. Developed by volunteers, HL7 provides data structures to define how disparate health care applications exchange clinical information. If app developers adhere to HL7 standards when building software, they can be sure that data systems in hospitals and clinics will be able to interface, and that patient data will migrate seamlessly between facilities.
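As a rough illustration of why a shared structure helps, the sketch below splits a simplified HL7 v2-style message into its named segments. The message content is invented, and real HL7 parsing is far more involved (the MSH segment alone has special delimiter rules), but the point stands: because every conforming system agrees on the delimiters and segment names, a receiver can locate patient data without custom ETL coding.

```python
# A simplified, invented HL7 v2-style message: segments separated by
# carriage returns, fields within a segment separated by pipes.
SAMPLE_MESSAGE = "\r".join([
    "MSH|^~\\&|SendingApp|ClinicA|ReceivingApp|HospitalB|202401011200||ADT^A01|MSG001|P|2.5",
    "PID|1||12345^^^ClinicA||Doe^Jane||19800101|F",
])

def parse_segments(message):
    """Split an HL7 v2-style message into {segment_name: list_of_fields}."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields[1:]   # key each segment by its name
    return segments

parsed = parse_segments(SAMPLE_MESSAGE)
patient_name = parsed["PID"][4]   # "Doe^Jane" in this simplified example
```

Every receiving system that honors the standard can run the same parse, which is exactly the property a common education data model would give campus software.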
Implications for higher education
With a more open marketplace for higher education on the horizon, the sector would be well served by a common data model. Savvy students looking to move faster toward degree completion by stacking credentials from different institutions will need their data to move between campuses as well as within them. Moreover, metric-driven learning might probe deeper into students’ educational histories, moving the beginning of student data collection beyond enrollment management and into the K-12 space.
Will the government get involved?
To move students and their data between institutions, private vendor solutions might not offer the necessary reach. The U.S. Government’s Common Education Data Standards (CEDS) Project, however, could be the humble beginnings of education’s own HL7. CEDS data structures aim to serve as industry standards covering constituents from early learning through postsecondary education and into the workforce. Though entirely voluntary—a significant barrier to adoption—the CEDS Project’s drive toward uniform data is championed by some big names in educational computing, and with widespread support it could alleviate many of the sector’s administrative pains.
Although CEDS adoption—or a system like it—might be years in the future, CIOs should still take note. Successful institutions of the future will all leverage liquid data—data that is structured, dynamic, and usable.