Published in IEEE Computer, March 2025.

CYBERDIDACTICISM: The New Epistemic Paradigm for Cognitive Minimalism and GenerativeAI

Hal Berghel

ABSTRACT: We postulate that GenerativeAI provides a new paradigm for disruptive technologies by enabling a community of cyberdidacts.

 

 

Educators have always been fascinated by, and enamored of, autodidacts. There's just something inherently uplifting about individuals who can master subjects on their own. For bright, passionate, self-motivated students driven by insatiable curiosity, autodidacticism is an ideal complement to formal education. It might even be an adequate replacement for traditional education in such cases were it not for the fact that its viability depends heavily on so many external factors: environment, social circumstance, access to resources, opportunities, individual personality, genetics, etc. Further, pseudo-autodidacticism in the hands of the parochial and illiberal can quickly be driven off the rails by ideological bias and prejudice. So while autodidacticism may not be an optimal learning environment for many if not most students, it is optimal for some, and refreshing for a teacher to witness.

With autodidacticism, a teacher is primarily a facilitator - someone who identifies and provides access to resources, identifies alternative educational pathways, makes recommendations based on experience, and, above all, avoids impeding the student's progress. In this sense, the teacher is somewhat akin to a crew coxswain: useful for direction, but accounting for little of the expended effort.

LEARNING AND PERSONALITY

Psychological models of human personality identify at least a half-dozen or so primary traits within human personality inventories. The Big Five [1][2] and Revised NEO [3] models list some variation of these five traits: conscientiousness, agreeableness, extroversion/introversion, openness to experience, and emotional stability, while other models add to this list (e.g., the HEXACO model adds a sixth: honesty-humility [4]). Psychologists have been exploring the relationship between personality traits and other human characteristics for some time. Of particular interest here is the relationship between personality traits and personal values [5] and between personality traits and academic performance [6].

Albert Bandura's notion of self-efficacy is a pivotal concept in this regard. [7] Bandura considers self-efficacy to be an individual's confidence in their ability to successfully complete a task. We note that self-efficacy is a perception or feeling that is experientially acquired by individuals, and thus both positively and negatively reinforced by actual successes and failures. Self-efficacy is both transferable to similar situations and generalizable to new situations that differ from those already experienced. Self-efficacy may also be acquired vicariously, through the observation of others. On Bandura's account, over time an increase (decrease) in self-efficacy produces confidence (apprehension) when faced with new challenges. But, as Bandura cautions, “analysis of how perceived self-efficacy influences performance is not meant to imply that expectation is the sole determinant of behavior. Expectation alone will not produce desired performance if the component capabilities are lacking.” (italics added) [7] Hold that thought. We will return to this topic below, when we show how harmful it is when inflated self-efficacy becomes a surrogate for critical capabilities such as reasoning proficiency, knowledge, and understanding, producing a deluded self-efficacy.

SELF-EFFICACY AND INTERACTION

Bandura's explanation that self-efficacy is a function of “experienced mastery” places it squarely within the scope of informatics [8] – the discipline that Robin Milner calls the science of interactive systems, and which many consider to be the nexus of technology, domain knowledge, and people. [9] That is, the process by means of which self-efficacy is achieved, the experienced mastery if you will, circumscribes a general-purpose, interactive learning system with multi-sourced and many-directional information stimuli, memory, a cognitive framework, feedback mechanisms, recognizers and analyzers of verbal and non-verbal patterns, and so forth. This is what Milner calls a ‘conceptual armoury'. There is an analogy between the acquisition of self-efficacy and what computer scientists call interactivity. [10] We may draw parallels between psychology and computer science descriptors in pairings such as individuals/objects, stimuli/input, response/output, thoughts/processes, and behavior/outcome: functionally similar pairs of elements that comprise complex systems that process and react to symbolic information in different domains. There is also a parallel between what psychologists call observational learning and what computer scientists call interactive computing. Strong cases can be made that both are non-algorithmic, since they may involve external, dynamic, interactive or reactive events that take place concurrently with, but independent of, any ongoing processing. [10][11] Interactivity worthy of the name must accommodate inherently unpredictable responses to unanticipated external stimuli governed by possibly incomprehensible (at least at the time) external influences. Letting a toddler play with a cell phone or mobile device, or letting a blindfolded child drive a car, are two primitive illustrations of the potentially unpredictable, non-algorithmic nature of interactivity. Interactivity is a property of a truly open system. 
Human cognition is such a system: constrained in some ways, goal-directed and motivated in others, but nonetheless always open to new and unforeseen cognitive threads.

EFFICACY AND OUTCOME EXPECTANCY

Bandura draws an important distinction between outcome expectancy and efficacy expectation.

An outcome expectancy is defined as a person's estimate that a given behavior will lead to certain outcomes. An efficacy expectation is the conviction that one can successfully execute the behavior required to produce the outcomes. Outcome and efficacy expectations are differentiated, because individuals can believe that a particular course of action will produce certain outcomes, but if they entertain serious doubts about whether they can perform the necessary activities such information does not influence their behavior. [7]

This difference is subtle, but critical to the hypothesis we will soon advance. Note that an irrational inflation of efficacy expectation may have undesirable social consequences: consider over-confident bridge designers (e.g., the Tacoma Narrows Bridge), the construction of poorly-thought-through irrigation canals (e.g., the Salton Sea), the circumvention of FDA guidelines in the use of dangerous pharmaceuticals (e.g., thalidomide), the failure to anticipate that some metals can corrode and may not withstand heat (e.g., Takata airbags), the assumption that blowout preventers will work well under high pressure (e.g., the BP Deepwater Horizon oil spill), the refusal to admit that saying a medical technology will work won't make it so (e.g., Theranos), and so forth. Examples such as these led me to propose Gresham's Twist on Moore's Law: the world's capacity to create absurd technology doubles every 18 months. [12]

I'm endorsing what I consider to be a modest and uncontroversial claim: unjustifiably high efficacy expectations can have dangerous social consequences and justify continued vigilance. Further, the potential for danger is proportional to the lack of justification. For the sake of simplicity, and given that we're not conducting a research study in the social sciences, we may place my endorsement into more familiar, if pedestrian, terms: delusional over-confidence is undesirable and should be avoided. In fact, a healthy skepticism is always warranted – especially when it comes to technology. [13] Further, any technology that facilitates or encourages delusional over-confidence is prima facie objectionable and its use should be discouraged.

CYBERDIDACTICA

I'm suggesting that unbridled over-confidence is likely undesirable, and shouldn't be encouraged without strong reservation. The widespread popularity of the “fake it ‘til you make it” and “move fast and break things” aphorisms has to be taken with a large grain of salt: they have limited utility and, as time has shown, are all too often coincident with negative externalities. These aphorisms are serviceable components of a ‘feel good' approach to management: while they may uplift the spirit and make the participants feel good about themselves and their activities, their vagueness is quickly seen to mask intellectual confusion or camouflage technological immaturity.

From my perspective as an educator, there is substantial anecdotal evidence that GenerativeAI falls within the scope of these aphorisms. In terms of the discussion above, it unjustifiably inflates the “efficacy expectations” of typical users. This anecdotal evidence derives in part from the observed disparity between GenerativeAI-produced homework and programming assignments on the one hand, and exam scores and student interviews on the other – a level of disparity that was not observed to the same degree before GenerativeAI use became commonplace in higher education. Of course an anecdotal correlation is by no means proof of causation, but it does suggest a worthy topic for further study by social scientists. My intuition as a teacher tells me that a study somewhat analogous to the work of Bandura will reveal a strong connection between reliance on GenerativeAI and sundry behavioral affectations such as inflated efficacy expectations, unjustified self-confidence, over-reliance on the volume of output, sub-optimal decision making, etc. That said, it is my intention here to explain the basis for my intuition as an educational observer, not a social scientist. I've observed the emergence of a new class of student, the cyberdidact, which for all intents and purposes may be considered the antithesis of the time-honored autodidact. It may be useful to draw some comparisons between the two.

The autodidact derives considerable satisfaction from an ability to solve problems, achieve understanding, acquire mastery, etc., by themselves. Not in isolation, mind you, for inspiration is drawn from a variety of their own experiences; but without any formal instruction, motivation or direction by others. To be sure, such self-learning is not without risk and is not to be recommended for everyone. But when it works, autodidacticism can avoid the inefficiencies and distractions of traditional, compulsory mass education and may lead to remarkable results.

By contrast, a cyberdidact has a consumer-based, transactional approach to learning and problem-solving, and only a casual, incurious interest in understanding and mastery. On the cyberdidact's account, there is nothing particularly satisfying in the personal quest for knowledge, but only in the apparent production of serviceable output. Indeed, that is the allure of GenerativeAI: it provides an epiphanic-like endorphin rush with minimal cognitive investment. In this way, it is akin to interactive video games – but with the additional advantage of requiring less continuous interaction in order to achieve satisfying results. Armed with queries like ‘how many Rs are in strawberry' [14] or instructions like ‘write a Python program to find prime factors for a set of integers,' the cyberdidact's cognitive investment is complete – irrespective, mind you, of whether they fully comprehend the significance of the queries. To illustrate, what do the ‘strawberry' query and response tell the user-typist about the role of tokenization in large language models, the discordance between phonology and orthography, or the difference between orthography and semantics? How much of an understanding of number theory and factorization is required to create the program directive? In traditional intelligence, curiosity is the starting point of a creative process. With GenerativeAI, curiosity is the end of the process.
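To make the point concrete, here is a minimal sketch (my own illustration, not generated output) of what the two examples above amount to. Nothing in it requires the user-typist to understand tokenization, orthography, or number theory:

```python
def count_rs(word: str) -> int:
    """Count occurrences of 'r': trivial at the character level, yet
    famously unreliable for LLMs, which see tokens rather than letters."""
    return word.lower().count("r")

def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n by simple trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever remains is itself prime
        factors.append(n)
    return factors

print(count_rs("strawberry"))   # 3
print(prime_factors(360))       # [2, 2, 2, 3, 3, 5]
```

The snippet is the entire deliverable the directive asks for; the cyberdidact can obtain and submit it without ever confronting why trial division terminates at the square root of n, or why a token-based model miscounts letters.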

TYPUS ERGO SCIO

The infatuation with GenerativeAI lies in the superficial appeal of the end product, embellished by the most cherished companions of a cognitive miser: immediate gratification and intellectual economy. But this intellectual parsimoniousness comes at a price. By deferring the majority of the cognitive heavy lifting to the GenerativeAI tool, the user skirts the most fundamental components of metacognition: introspection, contextualization, reflection, reasoning, and the like. The ancient Greeks would describe GenerativeAI as nous-less. What is more, this nous-lessness provides a fertile breeding ground for the propagation of cognitive biases, selective perception, cognitive dissonance, conspiracy theories, fake news, alternative facts, and sundry other pitfalls of inattentive and unprepared minds. The delusion behind the use of GenerativeAI may be expressed by this corruption of Descartes' dictum: typus ergo scio (I type, therefore I know). With many audiences, the appeal of GenerativeAI at this point seems to be presentation and optics over understanding and substance. GenerativeAI is more of a digital dilettante than an online oracle.

Because reasoning involves more than information retrieval, pattern recognition, and reaction, cognitive frugality carries with it a heavy cost. It understates the critical relationship of consciousness, understanding, and formal and informal logic to cognition, and completely ignores the role of self-correction, self-analysis and self-criticism. A first principle of cognition is recognizing the substance and significance of an event. This requires more of us than the ability to produce an executable query. In very narrowly focused applications where such considerations are ancillary, such as automated theorem proving, calculation, pattern recognition, information retrieval, etc., GenerativeAI is likely to be of considerable assistance to a scholar. But it is no substitute for human cognition: it may help in performing a calculation, but it remains silent on why the calculation is important in the first place.

THE CYBERDIDACTIC HYPOTHESIS AND THE ONLINE DOPPELGANGER THOUGHT EXPERIMENT

We suggest the following hypothesis in light of our observations.

The Cyberdidact Hypothesis: To the extent that it makes sense to correlate personality traits with academic performance, academic performance will not correlate with frequent use of, or reliance on, GenerativeAI.

Potential corollaries: (1) those personality traits that correlate positively with cyberdidacticism are likely to correlate negatively with autodidacticism, and vice versa; (2) the appeal of GenerativeAI is inversely related to erudition; (3) GenerativeAI is likely to lead to an unjustified, elevated self-efficacy; and (4) GenerativeAI as a learning tool is demonstrably sub-optimal. Why might this be?

We begin with Bandura's cautionary observation that “Expectation alone will not produce desired performance if the component capabilities are lacking.” Self-efficacy is not a sufficient condition for academic or scholarly ability. Self-deception may also be at work. Self-efficacy is conditioned by internal and external feedback. Were one to see that certain patterns of behavior continually return high marks on exams, positive recognition from knowledgeable, respected peers, continued success in the exercise of skills, etc., one might legitimately assume some degree of self-efficacy. But can we imagine a situation where the continuous feedback might be misleading?

Indeed we can. Consider the case of a Loyal Online Doppelganger: a loyal, reliable, expert online surrogate who can be counted on to take exams for you, interact with peers on your behalf, and perform your job – all via online communication systems where identity is electronically spoofed. Assume that the feedback on the Doppelganger's performance evaluations (in your name, of course) is consistently positive. But only you know of the existence of the Doppelganger, who, by assumption, will never disclose the ruse. Over time, how would the consistent, positive assessment of the Doppelganger's performance affect your self-efficacy? Remember that self-efficacy is conditioned by both internal and external feedback; in this case all of the external feedback about your (i.e., the Doppelganger's) performance is strongly positive, but for the wrong reasons. My suggestion – which is confirmable or refutable by studies conducted by social scientists – is that self-delusion is an inevitable consequence: over time a person's self-efficacy will unjustifiably increase despite the ruse, and this false sense of accomplishment will produce an over-confidence that in turn leads the individual to take on challenges for which they are not qualified. Our hypothesis predicts a demonstrable connection between our Loyal Online Doppelganger thought experiment and the actual use of GenerativeAI.

I further buttress my hypothesis by reference to the so-called “Big 5 Model” (aka OCEAN model) of personality traits of basic psychology. For present purposes, we'll use the definitions found on an online resource provided by the Harvard Graduate School of Education because it allows the online user to drill into arbitrary levels of detail and provides key references. [15]

  1. Conscientiousness – the tendency to be organized, responsible, and hardworking
  2. Agreeableness – the tendency to act in a cooperative, unselfish manner
  3. Neuroticism – Emotional stability is predictability and consistency in emotional reactions, with absence of rapid mood changes. Neuroticism is a chronic level of emotional instability and proneness to psychological distress
  4. Openness to Experience – the tendency to be open to new aesthetic, cultural, or intellectual experiences
  5. Extraversion – an orientation of one's interests and energies toward the outer world of people and things rather than the inner world of subjective experience; characterized by positive affect and sociability

Caveats are called for. First, models of personality types are instruments of social science, not computer science, so my analysis represents an over-simplified discussion of the topic. Second, there is no universal agreement on which personality traits belong in the Big Five and what precise definitions should be used to describe them. Third, there is nothing that compels us to use the number five – social scientists have used as few as two and as many as twenty. [16] Fourth, there are several different approaches to identifying relevant personality traits. I am neither a social scientist nor an expert on personality theory, but since I am advancing a hypothesis and not a proof, some brevity and occasional appeal to hand waving should be tolerable.

Social science research on the relationship between personality traits and self-efficacy has been conducted. In particular, the predictive powers of the Big 5 and self-efficacy on academic performance are well documented. The paragraphs below report the observed relationship between the Big 5 personality traits on the one hand, and academic performance and self-efficacy on the other. [6]

Research shows that the Big Five traits relate to academic performance. Conscientiousness, i.e., self-discipline, facilitates schoolwork by imparting preparedness. Openness, i.e., imagination, helps with new modes of studying. Agreeableness, i.e., compliance, increases consistency of class attendance. Extraversion, i.e., sociability, hampers students' focus, and neuroticism, i.e., emotional instability, is associated with test anxiety, where both traits hinder performance. Empirical support for the predictiveness of some traits is stronger than for others. For instance, “Conscientiousness is the most robust predictor of academic performance with an average correlation of .20”…

Self-efficacy is correlated with academic performance .... A recent meta-analysis examined 50 antecedents of academic performance and found that self-efficacy had the strongest correlation (r = 0.59) ..... In the same study, of the Big Five traits, only conscientiousness significantly correlated with performance (r = 0.19). In another synthesis, which examined 105 predictors, self-efficacy was the second (after peer assessment) strongest predictor of academic achievement....
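For readers who want a concrete handle on what a reported correlation of r = 0.59 or r = 0.19 means, a Pearson coefficient is computable in a few lines. The sketch below is my own illustration using fabricated, purely illustrative numbers (not data from the cited studies):

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustrative data: hours of self-directed study vs. exam score.
study = [2, 4, 6, 8, 10]
score = [55, 60, 70, 75, 85]
print(round(pearson_r(study, score), 2))   # 0.99
```

A value near 1.0 indicates a strong positive linear relationship; the 0.59 and 0.19 figures quoted above are, by this yardstick, moderate and weak respectively.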

My hypothesis derives from my strong suspicion that social science research will show an inverse correlation between some of these Big 5 traits and eagerness to rely on GenerativeAI for academic and scholarly pursuits. I challenge the reader to consider for themselves the degree to which personality traits such as conscientiousness, self-discipline, imagination, consistency of class attendance, etc. would correlate with reliance on GenerativeAI for scholarly insight. Similarly, one might consider whether unjustifiably high self-efficacy is likely to lead to quality academic or scholarly work. I can see how it might lead to increased productivity (via automation), but productivity in isolation is not a reliable indicator of the accuracy, value, and impact of scholarship. Particularly worrisome is reliance on GenerativeAI for the creation of programming source code – especially when used in critical systems. In fact, one would expect that more reliable contributors to quality academic or scholarly work might be a climate of self-doubt, skepticism, agnosticism, and aporia.

That said, at this point our hypothesis should be understood within the framework of technology education, rather than social science research. From what I can tell, most post-secondary educators that I work with agree that this hypothesis is consistent with observation in the classroom. However, social science research places much higher demands on hypothesis validation than observation and anecdotage. It remains to be seen whether this hypothesis will receive validation in that realm.

From a computing perspective, GenerativeAI is algorithmic; thinking is not. There is a dimension of human thought that is inherently non-linear, dynamic, and interactive. Peter Wegner convincingly makes the point that interactive computation is non-algorithmic in several papers, [10][17] and one key element of his argument is that algorithms cannot process disparate input information that was not anticipated in their design. In Wegner's words: [10]

The radical notion that interactive systems are more powerful problem-solving engines than algorithms is the basis for a new paradigm for computing technology built around the unifying concept of interaction…. The paradigm shift from algorithms to interaction is a consequence of converging changes in system architecture, software engineering, and human-computer interface technology….

What is more,

The irreducibility of interaction to algorithms enhances the intellectual legitimacy of computer science as a discipline distinct from mathematics and, by clarifying the nature of empirical models of computation, provides a technical rationale for calling computer science a science.

For additional detail, the reader is encouraged to consult Goldin, Smolka and Wegner. [11]

Wegner's argument implies that GenerativeAI platforms, as algorithmic implementations of large language model neural nets, will never achieve parity with human thought. Such being the case, the use of GenerativeAI algorithms can never prove to be an adequate substitute for human understanding.
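Wegner's distinction can be caricatured in a few lines of code. The following toy sketch is my own illustration, not Wegner's formalism: it contrasts a closed algorithm, fixed entirely at design time, with an interaction machine whose behavior depends on a stream of external stimuli and on its accumulated history:

```python
from typing import Callable, Iterable

def algorithm(x: int) -> int:
    """A closed algorithm: a fixed mapping from input to output,
    completely determined when the code is written."""
    return x * x

def interaction_machine(stimuli: Iterable[str],
                        react: Callable[[str, list[str]], str]) -> list[str]:
    """A toy interaction machine: responses depend on stimuli arriving
    from an external environment AND on the history of past stimuli,
    neither of which the designer can enumerate in advance."""
    history: list[str] = []        # persistent state across interactions
    responses = []
    for stimulus in stimuli:       # stimuli originate outside the machine
        responses.append(react(stimulus, history))
        history.append(stimulus)
    return responses

# The same machine behaves differently under different environments.
echo = interaction_machine(["hello", "hello"],
                           lambda s, h: "again?" if s in h else s)
print(algorithm(3))   # 9
print(echo)           # ['hello', 'again?']
```

The point is not that such a loop cannot be programmed, but that its observable behavior is a joint product of the machine and an environment the designer cannot fully anticipate, which is the crux of Wegner's claim that interaction is irreducible to algorithms.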

CYBERDILETTANTISM

Again, our experience suggests that cyberdidacticism will hold out special appeal for cognitive misers characterized by lower academic standards, limited scholarly ability, unjustified over-confidence, indolence, etc. I emphasize once again that this does not imply that GenerativeAI is without scholarly utility. Certainly, its uses to jog memory, maximize information uptake, check facts, search databases, review, debug and document program code, detect plagiarism, forgeries, copyright violations and authorship patterns, recognize images, translate languages, build models, address learning challenges, etc. are widely acknowledged. And if its use were restricted to such a support role in traditional learning environments, the potential downsides would be much shallower. However, when it is used as a surrogate for imagination, creativity, understanding, reasoning, etc. to create content, its overall social value comes into question. It is unfortunate that a large part of the appeal of GenerativeAI in higher education seems to be that it provides a path of least resistance in the quest for measurable output and meeting deadlines. As such, it is a natural complement to social media for those who prefer presentation to substance, opinion to fact, belief to certainty, and approximation to accuracy, and who are content to work with derivative and questionable content and to resolve problems with a minimum of critical reflection.

If left unchecked, GenerativeAI cannot help but facilitate cyber-dilettantism for those who are so inclined. If the goal is simply to generate plausible, token output, there is little incentive to go beyond a superficial understanding of a topic. It is the nature of the beast. GenerativeAI output justifies at best a participation trophy for the user who's minimally involved in the game.

A similar point was made in a recent article in the Chronicle of Higher Education:

Shriram Krishnamurthi, a computer-science professor at Brown University, has noticed that as more high schools teach programming with wildly varying degrees of rigor, incoming students are increasingly showing up thinking they know more than they do. “There's this weird thing where they are very competent at patching together some things and producing graphs that look nice,” Krishnamurthi said, “but their understanding of what they did is pretty low.” (He added that he wasn't casting judgment on the individual winners at NeurIPS. “There has always been and will always be a sliver of students that are extraordinarily capable,” he acknowledged.) Outside of NeurIPS, high schoolers can pay companies a handsome fee [18] to co-author academic papers, a cottage industry that's widely criticized. [19]

Of course, so-called paper mills have marketed bogus scholarship online for decades. This service is not limited to students. In a recent article in Science, Jeffrey Brainard reported that even “journals are awash in a rising tide of scientific manuscripts from paper mills - secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship.” [20] GenerativeAI is becoming integral to the paper mill supply chain – either by allowing users to bypass the paper mill, or by allowing the paper mills to become more efficient. In either case, academic standards are undermined. In addition, the GenerativeAI “paper mill” can create the illusion that the user has actually accomplished something. But in the case of the traditional paper mill, there is no delusion about authorship. The purchaser knows full well that they have no cognitive investment in the effort. However, GenerativeAI enables self-delusion, for the actual “author” is a computer, the paper is presumed unique, the process is anonymous, and there is no financial transaction recorded to betray the deception. GenerativeAI is a form of scholarly chicanery on a desktop. Everyone with a computer and an internet connection becomes an immediate cyber-dilettante.

THE ERA OF THE CYBERSAVANT

GenerativeAI provides access to computing power that usually isn't available to the general population. That would be a social good were it not for the fact that GenerativeAI's appeal lies in the ability to use these platforms with minimal cognitive investment, with the potential consequence of producing an unjustified self-efficacy. Therein lies the proverbial rub. Social scientists have studied the effects of inflated self-efficacy and overconfidence [6], but have not fully embraced the potential adverse effects of GenerativeAI in the mix. We only partially understand the social effects of such technology-inspired self-delusion. [21]

Further, over-reliance on GenerativeAI is but one of a number of current unhealthy trends in education. Its effects must be understood in the context of a broad decline in reading, the decline of foreign language programs, and scholarly materials becoming less appealing to a general audience. [22] Humanities, liberal arts, and a diversified, well-rounded education have always been threatening to illiberal autocrats, dictators, and demagogues who focus on the development of compliant subjects and obedient workforces rather than a community of free thinkers who continuously challenge the existing order. GenerativeAI is thus a demagogue's dream. If our hypothesis is correct, the use of GenerativeAI as a substitute for traditional scholarship is going to exacerbate many of our socio-political ills. While society enjoys a very long history of deploying technology before fully understanding the negative externalities of its use, GenerativeAI is unique in its ubiquity, ease-of-use, political implications, and potential for social disruption.

REFERENCES

[1] L. Goldberg, An alternative “description of personality”: The Big-Five factor structure, Journal of Personality and Social Psychology, 59:6, pp. 1216-1229 (1990). (online: https://projects.ori.org/lrg/PDFs_papers/Goldberg.Big-Five-FactorsStructure.JPSP.1990.pdf)

[2] L. Goldberg, The Development of Markers for the Big Five Factor Structure, Psychological Assessment, 4(1), pp. 26-42. (1992). (DOI: https://doi.org/10.1037/1040-3590.4.1.26)

[3] R. McCrae, P. Costa, Jr., and T. Martin, The NEO-PI-3: A More Readable Revised NEO Personality Inventory, Journal of Personality Assessment, 84(3), pp. 261-270. (online: https://www.tandfonline.com/doi/epdf/10.1207/s15327752jpa8403_05?needAccess=true)

[4] M. Ashton, K. Lee, M. Perugini, P. Szarota, R. de Vries, L. Di Blas, K. Boies, and B. De Raad, A Six-Factor Structure of Personality-Descriptive Adjectives: Solutions From Psycholexical Studies in Seven Languages, Journal of Personality and Social Psychology, 86(2), pp. 356-366 (2004). (online: https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.86.2.356)

[5] S. Roccas, L. Sagiv, S. Schwartz, A. Knafo, The Big Five Personality Factors and Personal Values, Personality and Social Psychology Bulletin, 28:6, pp. 789-801. (2002) (online: https://journals.sagepub.com/doi/abs/10.1177/0146167202289008)

[6] A. Stajkovic, A. Bandura, E. Locke, D. Lee and K. Sergent, Test of three conceptual models of influence of the big five personality traits and self-efficacy on academic performance: A meta-analytic path-analysis, Personality and Individual Differences, v. 120, pp. 238-245 (2018). (online: https://www.sciencedirect.com/science/article/pii/S0191886917305068/pdfft?md5=4fbccb30dd227e0a39f3592331685f60&pid=1-s2.0-S0191886917305068-main.pdf)

[7] A. Bandura, Self-efficacy: Toward a unifying theory of behavioral change, Psychological Review, 84(2), pp. 191-215. (1977). (online: https://psycnet.apa.org/doiLanding?doi=10.1037%2F0033-295X.84.2.191)

[8] R. Milner, Turing, Computing and Communication, in D. Goldin, S. Smolka and P. Wegner (eds), Interactive Computation: The New Paradigm [11].

[9] H. Berghel, Disinformatics: The Discipline behind Grand Deceptions Computer, vol. 51, no. 1, pp. 89-93, (2018) doi: 10.1109/MC.2018.1151023 (Online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8268033)

[10] P. Wegner, Why Interaction is More Powerful than Algorithms, Communications of the ACM, 40:5, pp. 80-91 (1997). (online: https://dl.acm.org/doi/pdf/10.1145/253769.253801)

[11] D. Goldin, S. Smolka, P. Wegner (eds), Interactive Computation: The New Paradigm, Springer, New York, 2006.

[12] H. Berghel, Technology Abuse and the Velocity of Innovation, Cutter IT Journal, 28:7, pp. 12-17 (2015). (online: https://www.cutter.com/article/technology-abuse-and-velocity-innovation-487516)

[13] J. L. King, H. Berghel, P. G. Armour and R. N. Charette, Healthy Skepticism, Computer, 57:11, pp. 86-91, Nov. 2024. (online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10718683)

[14] A. Silberling, Why AI can’t spell ‘strawberry’, techcrunch, 08/27/2024. (online: https://techcrunch.com/2024/08/27/why-ai-cant-spell-strawberry/)

[15] The Big Five Personality Traits, Online resource, Social and Emotional Learning Lab, Graduate School of Education, Harvard University. (Online: http://exploresel.gse.harvard.edu/frameworks/7)

[16] O. John, L. Naumann, & C. Soto, Paradigm shift to the integrative Big Five taxonomy: History, measurement, and conceptual issues. In O. John, R. Robins, & L. Pervin (Eds.), Handbook of personality: Theory and research – 3rd ed., Guilford Press, New York, pp. 114-158. (2008) (online preprint: https://www.colby.edu/wp-content/uploads/2019/06/John_et_al_2008.pdf)

[17] P. Wegner, Interactive Foundations of Computing, Theoretical Computer Science, 192:2, pp. 315-351. (1998). (online: https://www.sciencedirect.com/science/article/pii/S0304397597001540?via%3Dihub)

[18] D. Golden and K. Purohit, The Newest College Admissions Ploy: Paying to Make Your Teen a “Peer-Reviewed” Author, ProPublica, May 18, 2023. (online: https://www.propublica.org/article/college-high-school-research-peer-review-publications)

[19] S. Lee, Teens are Doing AI Research Now. Is That a Good Thing?, The Chronicle of Higher Education, January 14, 2025. (online: https://www.chronicle.com/article/teens-are-doing-ai-research-now-is-that-a-good-thing?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_12306631_nl_Academe-Today_date_20250115)

[20] J. Brainard, Fake scientific papers are alarmingly common, Science, 9 May 2023. (online: https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common)

[21] H. Berghel, Generative Artificial Intelligence, Semantic Entropy, and the Big Sort, Computer, 57:1, pp. 130-135. (2024) (doi: 10.1109/MC.2023.3331594) (online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10380248)

[22] S. Carlson and N. Laff, The Hidden Utility of the Liberal Arts, The Chronicle of Higher Education, January 21, 2025. (online: https://www.chronicle.com/article/the-hidden-utility-of-the-liberal-arts?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_12412980_nl_Academe-Today_date_20250127)


Also see Esther Shein, The Impact of AI on Computer Science Education, CACM, July, 2024.