Note: This paper has undergone major revisions since I posted it here. To download the latest version, click on www.csupomona.edu/~rdwestfall/ais/relevancymanifesto.html
AN IS RESEARCH RELEVANCY MANIFESTO
I would like to thank Izak Benbasat, Ron Weber, and Rob Kling for their very helpful correspondence in response to my e-mail posting on this subject on the ISWorld Net list server. Although they may disagree with some or much of the content, their comments greatly helped me develop my thinking on this topic. In addition, I would like to offer a special note of thanks to the first-year professor who provided the very eloquent posting that I have included as an appendix to this paper. I also thank the many others, too numerous to mention, who responded to my posting.
Ralph Westfall received a Ph.D. in Management of Information Systems from Claremont Graduate University in 1997. He teaches via physical presence at California State University in Long Beach, and has plans to teach via distance learning technology at the University of Maryland-University College. He has published in Information Systems Management and has a chapter in The Virtual Workplace. He has presented papers at conferences of the Association for Information Systems and Decision Sciences Institute, and was an invited participant at a 1998 National Research Council workshop on information technology literacy. His research interests include the virtual office and other forms of remote work, distance learning, information technology literacy, and applications of information technology in higher education.
Practitioners indicate that IS research is not relevant to their needs. This has led to calls from within the IS community for research that is more relevant. Within the context of these concerns, this paper initially provides a market analysis of the competitive strengths and weaknesses of academia vis-à-vis IT market research firms and units. The analysis indicates four categories of research where academia has a relative advantage: issues contrary to commercial interests, unsolved problems, issues economically unattractive to commercial researchers, and research on IS teaching.
The paper also addresses the issues of technical knowledge and skills in IS academia, and publication outlets, as they relate to relevance. The analysis indicates that academia should provide incentives for acquisition of technical skills, especially in the form of certifications where available. It also indicates that publication of high quality research in practitioner-oriented outlets should receive recognition comparable to publication in traditional academic journals.
ISRL Categories: I, IB, IA, IB04, IB03, IB06, IA02
How relevant should IS research be to practitioners? This question is becoming a "hot button issue." Several prominent IS academics recently expressed their opinions on the subject in leading journals. The author posted an e-mail on this to the ISWorld Net list server in July of 1998 and was pleasantly surprised by the number of responses from well-known figures within two days. Several respondents expressed their opinions rather ardently; for example, see the Appendix for a thoughtful posting from a first-year professor.
The issue is attaining the status of a debate but fortunately, although the feelings appear strong, up to this point it has remained civil. The positions tend to be implicit, but the camps appear to be lining up along the following division:
Because practitioners have expressed concerns about this issue, the paper will first argue for the importance of relevance in research. The latter part of the paper argues that the technology-driven aspects of the IS field make it different from other academic fields. Therefore, to increase relevance to practitioners and students, faculty evaluation systems need to provide increased incentives for acquisition and maintenance of technical skills.
As an applied field, we need financial support and institutional validation from practitioners. And we have received tremendous amounts of it in the past. The university where the author recently completed a Ph.D. was one of 13 schools that each received a $2,000,000 grant from IBM in the mid-1980s. Tallies of the academic versus industry affiliations of names listed inside the front covers of early versus recent editions of MIS Quarterly suggest relatively more industry involvement in the early years. For example, the March 1978 issue (Volume 2) had two consulting editors and an associate editor from industry, within a relatively small list of editorial positions.
Unfortunately many practitioners now feel that IS research is largely irrelevant; for example:
Keen’s paper (1980, as quoted in Alavi et al. 1989) for the first ICIS conference stated that "since computers are important and knowledge of how to use them limited, academics have been given a line of credit to draw on." We need to be very aware of who is extending this line of credit, and quite concerned lest they (not just industry, but the larger society and the students it sends us) withdraw it.
The following items represent some salient recent contributions to the debate, and provide additional references.
The December 1996 issue of Information Systems Research had two Research Commentaries concerning "diversity" in IS research:
Some statements in the papers cited above indicate that we are attempting to enter (actually reenter) a new market, with an industrial rather than academic audience. For example, Benbasat and Zmud (forthcoming) explicitly identify industry consumers for relevant research. However, to be most effective in this new endeavor, we should do a market analysis: who are our competitors, and what are our strengths and weaknesses vis-à-vis theirs?
The Forresters and Gartners of the IT world, and the R & D units of major hardware and software companies, have resources that, in comparison to ours, are virtually unlimited. They can complete and circulate their studies faster than we can do a preliminary design. For example, in March 1994 Brancheau et al. (1996) mailed the first round of a Delphi study on information systems management key issues. They reported their findings in the June 1996 issue of MIS Quarterly. In contrast, Computer Sciences Corporation’s (CSC 1997) annual critical issues survey has a turn-around time of approximately six months from fielding the survey to publication.
To provide additional context on the timing issue, consider personal computer systems as a rough indicator of the pace of change in information technology in this same interval. The most expensive desktop system in Dell’s back cover advertisement in the March 15, 1994 issue of PC Magazine ran at 60 MHz, had 16 MB of RAM, a 1 GB hard drive, a 2x CD-ROM drive, and listed at $4,699. Dell’s ad in the June 11, 1996 issue offered a high-end system for $3,599 that ran at 200 MHz, with 32 MB of EDO RAM, a 2.5 GB hard drive, and an 8x CD-ROM drive.
The concept of an "Internet year" provides another perspective on the rapid rate of change in our field. Things develop so rapidly on the Internet, especially in its World Wide Web aspect, that a period of several months appears comparable to a full year in other areas of the economy.
This is not in any way intended to disparage the notable contributions of James Brancheau and the co-authors of the study cited above to the development of the academic field of IS, or the quality of the efforts of the editors and reviewers at MIS Quarterly. On the other hand it does suggest that, if we want our work to be relevant to practitioners, there are major limitations on the types of research we can do. And these limitations are even more pronounced if the results of the research are primarily disseminated via journals operating under the traditional academic publishing model.
Because of their resources, we generally cannot compete with commercial IT market research firms, and captive IT research units, on topics that they are actively researching. On the other hand, they have to generate income in excess of expenses and maintain long-term relationships with their clients, which sets limits on the topics on which they will work.
All is not lost, however. Consider the following characteristics of academia, and their implications:
Considering these relative strengths and weaknesses, this paper identifies four categories of topics that are relevant to our external constituencies but may not be practical for Gartner et al., or internal R & D units in the hardware and software companies. This paper asserts that academia can make a unique contribution with regard to the following types of topics:
Issues Contrary to Commercial Interests
The IT industries produce a very wide range of products and services. Therefore there are many opportunities for IS academics to do research of interest to practitioners--on the effectiveness of these products and services. Our status as an objective third-party, without a financial interest in the outcomes, makes our research even more relevant. This type of study corresponds to "evaluation research, which is strongly established in professional schools of education, social work, and public health" (Robey and Markus 1998).
Benbasat and Zmud (forthcoming) suggest that IT research will be more relevant if presented so that practitioners can use it to justify IT initiatives and actions. We may be able to provide an even more valuable service to industry and society, by doing research that may expose problems and thus "unjustify" certain technologies or practices that generate substantial revenues for vendors and consultants. Those who are benefiting are unlikely to hire commercial market research firms to study problems of which their customers are not generally aware. If commercial firms do find such problems in the course of other studies, the results will not see the light of day except when solutions are developed and the problems can be identified in competitive advertising.
In contrast to the long-term approach that is acceptable for unsolved problems (next category), for this type of issue we need to be relatively nimble. However our field can point to examples of timely research in this category:
The commercial world recognizes our institutional credibility and that our duty is to the larger society, and is not in a position to effectively criticize competently executed research. Organizations may withhold funding and cooperation on research counter to their interests, but losses to them usually represent gains to other potential sponsors.
Unsolved/Seemingly Intractable Large Problems
These are issues with potentially major economic and/or societal impacts, but which still remain open in spite of extensive research over a long time. Commercial firms lose interest because of the complexity and low probability of a near-term payoff. They may continue to work sporadically on the problems, but do not come up with any real breakthroughs.
In contrast, academics remain interested because of the challenge of the problems, and because the economic and societal ramifications offer a hope of glory if we can make substantial progress. Academia recognizes the value of a good effort on a complex problem and does provide rewards--i.e., publications--even if the research does not produce solutions. Since the problems have been unsolved for a substantial period, it generally does not matter that we work slowly on them.
Rigor: These types of problems also represent very appropriate targets for the sophisticated theories, tools, and techniques associated with academic rigor. Some in industry do not recognize the value of our seemingly esoteric approaches. If this type of rigor is truly worthwhile, what better way to prove it than to make progress on unsolved problems that are economically or socially important? This would be far better than applying such sophistication to problems that are trivial, or where the results appear "intuitively obvious" (Kavan 1998, p. 20).
Implicitly one of the rationales for using complex tools or techniques on small problems is to pilot their use. Reviewers should reject this rationale except in situations where there is an opportunity to "beat our competitors (journals in other fields) to market" with the initial demonstration of the value of the approach. Otherwise they should require that researchers apply such approaches to worthy problems. Similarly, consistent with editorial decisions at MISQ in the mid-1990s, reviewers should reject research whose "primary objective is to achieve, through the application of sophisticated methods, yet another minor improvement in some instrument that has already undergone multiple tests" (Benbasat and Weber 1996).
Hitt and Brynjolfsson (1996) use statistical methods related to economic theory to address the long-standing IT "productivity paradox." Although their methodologies may be inaccessible to some practitioners, the issue certainly qualifies as a major problem. One of their findings may raise more questions for practitioners than it resolves. They report that even with "capital costs ... as high as 69 percent per year, we rejected the hypothesis that the net return to IT Stock was zero" (p. 131). However any practitioner who reads the article closely enough to notice this item would probably be sophisticated enough to view it in the context of the other analyses, and also recognize that academic research often leaves "loose ends" for future research.
Examples of Unsolved Problems
Issues Not Economically Attractive to Commercial Firms
We do our research on a part-time basis, but do have access to graduate student labor on very favorable terms. This leads to the following types of opportunities:
CIGNA Corp. recently sponsored studies at the Comparison and Evaluation Laboratory at Temple University. Student teams did in-depth evaluations as part of a course or for independent study credit, under the guidance of a faculty advisor (Mandviwalla 1998).
For small, longer-horizon projects with uncertain outcomes, organizations may be willing to invest relatively modest amounts of money. We can offer access to the cutting-edge concepts that might be necessary to achieve a breakthrough. Even if we do not generate a significant advance in knowledge, the cost is lower than doing it commercially. Sponsors recognize up front that they may end up writing off their investment as support for the good causes of student education and academic research.
One of our major strengths is that we supply graduates with skills in the rapidly growing IS field. Regardless of whatever opinions academics in other areas harbor about us, it is hard for them to successfully argue with the increasing student demand for our courses, and our success in placing these students in good positions.
On the other hand, this paper questions whether we ourselves really understand the implications of the explosive growth in information technology. This growth makes IS fundamentally different from other fields that do not need to prepare their students to deal with content that is rapidly becoming displaced or devalued by newer technologies. We therefore need to do research on how to best prepare our students, and the organizations that hire them, to cope with this challenge. The following papers are examples of research on this unique aspect of our field:
Note also that we have a tremendous advantage over commercial researchers in this area. We work in academic settings, have access to undergraduate students as subjects and graduate students as assistants, and have many opportunities to test technologies and related procedures in teaching situations.
TECHNOLOGY AND THE IS ACADEMIC
As mentioned above, we may not fully recognize the implications for teaching of the explosive growth in information technology. The author also questions whether we have considered the implications in regard to how we evaluate faculty.
Like other faculty in a research university, we have teaching and service obligations, we have to do research and publish, and we have to keep up with the scholarship in our field. Keeping up with the scholarship is complicated by the multitude of reference disciplines. Other fields have associations with reference disciplines--e.g., Psychology has a long-standing and fruitful involvement with some aspects of mathematical statistics. However, the author suspects that very few fields are involved with as many reference disciplines as we are.
In addition to all of the above, we need to cope with the unique characteristics of IT. Unlike academics in other fields, we must keep up with the incredible pace of technological development (on a representative basis, with different members keeping up with different technologies). This is necessary for us to maintain our credibility with our external constituencies of industry and students, and for us to do research that is relevant.
Recognizing the importance of technology, a faculty member may invest additional time and effort on a continuing basis to keep up with a set of existing and new technologies. Everything else being equal, this person will not be able to put out as much research--which meets the standards of our reference disciplines--as a faculty member who does not keep up with technology. Which faculty member will have more credibility with industry? Which one will be more likely to produce research that practitioners can really use? Which one will be better able to prepare students to meet the challenges they will face in their careers? But which one will receive the greater rewards in the academic spoils system?
Industry has already faced a similar issue, and responded a long time ago by developing separate career "ladders" for technical employees (Brooks 1975). Some employees aspire to the excitement and challenge of management. Those with technical interests also have opportunities for advancement, by enhancing their skills while remaining on the technical side.
This paper recommends that our field should similarly offer a technical track, where maintaining and enhancing technology skills is a very important factor in the evaluation and advancement processes. This will give us an opportunity to enhance our stature among external constituents, by "practicing what we preach" about the importance of continuing education to practitioners. It will also provide knowledge and skills that will be useful for research on the application of technology to IS education.
The increasing prominence of certification programs (Novell’s CNE and Microsoft’s MCSE are well-publicized, but there are many others) provides a convenient vehicle for assessing technical competence when evaluating faculty members. The certification exams are detailed enough to assure that a person cannot pass without substantial knowledge and skills in the subject. Including external certifications is consistent with the current practice of considering publishing records, which are also externally mediated, in academic evaluations.
Certifications can significantly increase our credibility with our external constituencies. Those who acquire them will have opportunities to teach certification courses to our students. They will also be able to provide up-to-date and relevant information to students in other courses, and to do research that is more relevant to practitioners. The certifications are widely accepted and well-respected in industry, and therefore provide additional opportunities to enhance our reputation among practitioners.
However exhortations to increase technical skills will not benefit our field unless we provide appropriate incentives. If we do not reward such efforts, we will experience a "brain drain." Technically competent faculty will leave academia for well-paying industry positions where they can do work that is very relevant to practice. And practitioners will continue to view our work as irrelevant to their concerns.
Any discussion of evaluation leads into the topic of publishing outlets. At present, publications in academic journals count more than in other outlets. These other outlets often have glossy covers, use color, include pictures and graphics (not just line drawings), take advertising (and not just for academic positions), and request that articles not include more than 12 references (if any). The editors of these publications are highly selective about what they publish (try submitting some of your good work to Communications of the ACM or Harvard Business Review).
The distinguishing characteristic of these outlets is that they emphasize interest to their audience and readability. People do read what they publish, in contrast to much of the content of academic journals. (In IS, we sometimes refer to "read only" access. There is a well-known quip about academic journals functioning in "write-only" mode [quoted without source in Denning 1996].) These other publications have substantial readerships, and their demographics include many IS professionals and managers who make decisions about IT. One of the reasons they read these publications is because they are looking for up to date information about issues they face in their work.
Considering what this paper has said up to this point, the recommendation is obvious. Academia needs to reward publication of high-quality material in non-academic outlets as well as publication in traditional outlets. The criterion should not be the type of outlet, but the quality and likely impact of the research. We need to stop using the academic stature of the outlet as a surrogate for the value of the content. If we can evaluate the quality and value of Ph.D. dissertations, we can do the same for faculty research published in non-traditional venues.
Increasing Involvement with Industry Contacts
This is a common theme in papers on relevance (Benbasat and Zmud, forthcoming; Robey and Markus 1998; Saunders 1998; Senn 1998). Ironically, we often fail to recognize people who have these contacts and are around us all the time. Schools are using more and more part-time lecturers, to reduce costs and sometimes to teach technical skills not available through the regular faculty. These people usually work in industry, often with access to newer technologies as well as current management issues and practices. We should explore ways to tap this resource of sometimes marginalized fellow professionals, so their knowledge and contacts can benefit faculty researchers as well as students.
Benbasat and Zmud (forthcoming) recommend entering into formal dialogues with practitioners, to identify their unsolved problems and the issues that they think will be important in 3-5 years (corresponding to the lead time for research and publication of results in academic outlets). These are both excellent ideas, although the latter is not a substitute for speeding up the academic publishing cycle. Hopefully the research and dissemination of results can be handled quickly enough to maximize the value to our community.
Some will raise the concern that the resulting "lists of approved topics" will be used to enforce a lock-step conformity in our field. Given the large and growing number of research outlets, diversity of interests, and ease of entry into publishing (Robey and Markus 1998), this threat appears unlikely to materialize.
To understand the types of research and reward systems most appropriate for our field, this paper suggests that we need to consider another issue besides our relative strengths and weaknesses. We also need to understand who we are, and the implications of our identity.
As a "thought experiment," consider the following digression. Neuroscientists find that people exhibit a negative brain wave peaking at around 400 milliseconds after exposure to a word in a sentence that is semantically inconsistent with the preceding words. The magnitude of this "N400" wave is larger when the semantic incongruity is larger (Kutas and Hillyard 1980). Keeping in mind this background information, read the following statements:
In reality, IS is not one of the cultural foundations of world civilization, or even western civilization. People do not come to us to discover the meaning of life or the nature of being. Students come to us because we can provide skills that will help them get good jobs, pay off their student loans, and make a meaningful contribution to the development of the organizations that hire them. Industry is very happy to hire all the competent graduates we can turn out, and wishes we could supply more. If we can generate research that helps IS professionals do their work more effectively, so much the better.
This is not to say that our field is not intellectually respectable. As evidenced by the continuing high failure rates over many years, developing and successfully implementing large computer systems (at the extreme, the US air traffic control system, [Gibbs 1994]) is an extremely complex and challenging activity. Arguably, "Computer programs are mankind’s most elaborate artifacts." (Shore 1985, p. 209) Any assistance we can provide in reducing the failure rate, and associated impacts, will be a noteworthy contribution to human progress. (In this regard, Leon Kappelman’s [e.g., 1997] diligent efforts on the Y2K problem are certainly a credit to our community.)
When people go through adolescence, they often doubt their self-worth. They compare themselves to others and emphasize their relative shortcomings. They fail to recognize their own strengths, and question the value of their unique abilities. This is normal in adolescence, albeit painful, and can be beneficial if it helps individuals develop their own identities. On the other hand, it is sad to see--and possibly pathological--in a person who has reached the age of 30.
Our field has an established research tradition that Alavi et al. (1989) trace back to 1968. Therefore it is time for us to exorcise the specters that are haunting us, and leave adolescent angst behind. We are who we are. Our research, and the evaluation policies that drive it, should unashamedly reflect our place in the world.
Alavi, M., Carlson, P. and Brooks, G. "The Ecology of MIS Research: A Twenty Year Status Review," Proc. of the Tenth International Conference on Information Systems, Boston, 1989.
Becker, F., PonTell, S., Gray, P., and Markus, M. L. WorkSmart: Gaining Competitive Advantage from Integrated Workplace Strategies, Center for the New West, Denver, Colorado, 1996.
Benbasat, I. and Weber, R. "Research Commentary: Rethinking Diversity in Information Systems Research," Information Systems Research (7), December 1996, pp. 389-399.
Benbasat, I. and Zmud, R. "Empirical Research in Information Systems: The Practice of Relevance," MIS Quarterly (forthcoming).
Brancheau, J. C., Janz, B. D., and Wetherbe, J. C. "Key Issues in Information Systems Management: 1994-95 SIM Delphi Results," MIS Quarterly (20), June 1996, pp. 225-242.
Brooks, F. P., Jr. The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley Publishing Company, Reading, Massachusetts, 1975.
CSC. Critical Issues of Information Systems Management, Computer Sciences Corporation, El Segundo, California, 1997.
Davenport, T. "Storming the Ivory Tower," CIO (10), April 15, 1997, pp. 38-40 (accessible from http://www.cio.com/archive/041597_think_content.html).
Denning, P. J. "Electronic Publishing Plan a Must," Computing Research News (8), September 1996, p. 2 (accessible from http://www.cra.org/CRN/issues/9604.pdf).
Dennis, A. R., Nunamaker, J. F., Jr. and Vogel, D. "Comparison of Laboratory and Field Research in the Study of Electronic Meeting Systems," Journal of Management Information Systems (7), Winter 1991, pp. 107-135.
Edberg, D. T. and Bowman, B. J. "User-Developed Applications: An Empirical Study of Application Quality and Developer Productivity," Journal of Management Information Systems (13), Summer 1996, pp. 167-185.
Gibbs, W. W. "Software’s Chronic Crisis," Scientific American (271), September 1994, pp. 86-95.
Goodhue, D. L., Kirsch, L. J., Quillard, J. A., and Wybo, M. "Strategic Data Planning: Lessons from the Field," MIS Quarterly (16), March 1992, pp. 11-34.
Hitt, L. M. and Brynjolfsson, E. "Productivity, Business Profitability, and Consumer Surplus: Three Different Measures of Information Technology Value," MIS Quarterly (20), June 1996, pp. 121-142.
Ives, B. "Conclusions," The Information Systems Shuttle: A Proposal for Collaborative Learning and Research, ISWorld Net Virtual Meeting Center, May 4, 1998 (accessible from http://ww2.cis.temple.edu/isworld/vmc/April98/ives/index.htm).
Kappelman, L. A. Year 2000 Problem: Strategies and Solutions from the Fortune 100, International Thomson Computer Press, London, 1997.
Kavan, C. B. "Profit through Knowledge: The Application of Academic Research to Information Technology Organizations," Information Resources Management Journal (11), Winter 1998, pp. 17-22.
Keen, P. G. W. "MIS Research: Reference Disciplines and a Cumulative Tradition," Proc. of the First International Conference on Information Systems, Philadelphia, 1980.
King, J. L. and Applegate, L. M., "Crisis in the Case Study Crisis: Marginal Diminishing Returns to Scale in the Quantitative-Qualitative Research Debate," Information Systems and Qualitative Research, in A.S. Lee, J. Liebenau and J. I. Gross (eds.), Chapman & Hall, 1997, pp. 28-30 (accessible from http://www.hbs.edu/applegate/cases/research/paper.html#patronage).
Kutas, M. and Hillyard, S. A. "Reading Senseless Sentences: Brain Potentials Reflect Semantic Incongruity," Science (207), January 11, 1980, pp. 203-205.
Lacity, M. C. and Hirschheim, R. Information Systems Outsourcing: Myths, Metaphors and Realities, John Wiley & Sons, Inc., New York, 1993.
Lacity, M. C. and Hirschheim, R. "The Information Systems Outsourcing Bandwagon," Sloan Management Review (35), Fall 1993, pp. 73-86.
Lederer, A. L. and Sethi, V. "The Implementation of Strategic Information Systems Planning Methodologies," MIS Quarterly (12), March 1988, pp. 445-461.
Mandviwalla, M. "Reports on Internet Development," e-mail posted to ISWorld Net List Server on June 2, 1998.
Mandviwalla, M. and Gray, P. "Is IS Research on GSS Relevant," Information Resources Management Journal (11), Winter 1998, pp. 29-37.
Robey, D. "Research Commentary: Diversity in Information Systems Research: Threat, Promise, and Responsibility," Information Systems Research (7), December 1996, pp. 400-408.
Robey, D. and Markus, M. L. "Beyond Rigor and Relevance: Producing Consumable Research about Information Systems," Information Resources Management Journal (11), Winter 1998, pp. 7-15.
Ross, J. A. "Spreadsheet Risk," Harvard Business Review (74), September-October 1996, pp. 10-12.
Saunders, C., "Editorial Preface: The Role of Business in IT Research," Information Resources Management Journal (11), Winter 1998, pp. 4-6.
Senn, J. "The Challenge of Relating IS Research to Practice," Information Resources Management Journal (11), Winter 1998, pp. 23-28.
Shore, J. The Sachertorte Algorithm and Other Antidotes to Computer Anxiety, Viking, New York, 1985.
Westfall, R. D. "Evaluation and Assimilation Skills As Key Knowledge Aspects Of Information Technology Literacy," National Research Council, Computer Science and Telecommunications Board Workshop on What Everyone Should Know about Information Technology, Irvine, California, 1998 (accessible from http://www.cyberg8t.com/westfalr/it_litrc.htm).
Westfall, R. D. Remote Work: The Demand for Telecommuting, unpublished doctoral dissertation, Claremont Graduate School, Claremont, California, 1997a (abstract accessible from http://www.cyberg8t.com/westfalr/dis_abst.html).
Westfall, R. D. "Using the Learning Needs Model in Introductory Information Systems Classes," in E. P. Robinson (Coordinator), Proc. Decision Sciences Institute, San Diego, California, 1997b (accessible from http://www.cyberg8t.com/westfalr/lrn_need.html).
APPENDIX: Response to ISWorld Net Posting on Relevancy Crisis in IS Research *
I am interested in your study of IS relevancy because I share similar concerns as outlined in your section on SWOT analysis. Why does only IS research, for that matter all research in the various business school disciplines, seem to suffer from such a relevancy crisis? As a Ph.D. student, time and time again, I have struggled with this idea of relevancy and the need for making a real contribution to the business world. The few times I brought it up, I was either told that I didn't understand enough to talk about it or that we were in the business of "knowledge for the sake of knowledge."

While there is truth and merit in preserving academic freedom in intellectual pursuits, we also need to ask ourselves the question: "Like the medical sciences or for that matter any of the physical sciences, is our discipline leading the way for practitioners?" The question was answered (a few years ago) for me by a stalwart in this field: "The practitioners are at the forefront in coming up with innovative business practices and systems." So then, are we glorified reporters of such business practices? I guess we are, except that we couch it in a language that few understand and disseminate it in the name of theory building, theory testing, etc.

I think it would be very interesting to conduct an empirical study that examines the extent to which IS research has contributed to IS practice. The big question that needs to be answered is: "In the name of theory building and development, are we really making progress or are we going in circles?"

During the last five years (4 of which were in a Ph.D. program) of my academic career, I must have read at least 500 articles across various business disciplines. The contribution to my existing knowledge of business practices was minimal to say the least. I read many superbly crafted pieces that reflected the authors' command of the English language as well as their command of the literature and their ability to generate conceptual models. In other words, there was ample evidence of excellent training in doing great research, but little evidence of any real practical relevance of their work.

Practical relevance is a relative term and the argument might get us nowhere as there is always a way to defend the relevance of one's work. But the reality is that we don't seem to make an impact on IS practice. I really hope that I am wrong but I am afraid that is not the case.

On a final note, to me publishing is not a game but is serious business. At the end of it all, I would like my work to be of real value and not an exercise in intellectual flirtation.
* respondent requested anonymity