A Conceptual Framework for Human AI Collaboration: Ontological and Epistemological Perspectives

Authors

  • Meyti Eka Apriyani, Department of Electrical Engineering and Informatics, State University of Malang, 65145, Indonesia
  • Syaad Patmanthara, Department of Electrical Engineering and Informatics, State University of Malang, 65145, Indonesia

DOI:

https://doi.org/10.51747/energy.si2025.251

Keywords:

Epistemology, Ontology, Human–AI Collaboration, Information Systems, Knowledge Co-Creation

Abstract

Collaboration between humans and artificial intelligence (AI) has become a pivotal phenomenon in the evolution of information systems, yet its philosophical foundations remain underexplored. This study develops an integrative conceptual framework that combines ontological and epistemological perspectives to examine how human–AI collaboration shapes knowledge creation and decision-making within sociotechnical contexts. The proposed framework identifies five ontological levels of AI agency and four epistemological processes underlying hybrid knowledge formation. It further integrates six interrelated dimensions—ontological, epistemological, technical, ethical, social, and organizational—that collectively define the dynamics of human–AI collaboration. The findings contribute to the theoretical discourse by introducing the constructs of quasi-epistemic entities and hybrid epistemology, which reconceptualize AI not merely as a computational artifact but as a participant in epistemic processes, thereby extending existing theories of distributed cognition and epistemic accountability beyond instrumental human–machine models. Practically, the framework informs the design of transparent, adaptive, and ethically aligned human–AI systems within information-intensive environments.

References

[1] Human-AI Co-Scholarship: Reframing AI as an Epistemic Contributor to Knowledge Creation. Social Science Research Network. DOI: 10.2139/ssrn.5310319. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5310319.

[2] Constructivist Mixed Human-AI Approaches Overcome Epistemic Limitations of LLMs: A Cognitive Insight from Socio-Technical Research. AIS Electronic Library. Retrieved from https://aisel.aisnet.org/oisiworkshop2025/11/.

[3] Symbiotic Epistemology: Quasi-Epistemological Entities and the Philosophy of Human-AI Cognitive Partnership. PhilPapers. Retrieved from https://philpapers.org/rec/KAPSEQ.

[4] Co-evolutionary Intelligence: Rethinking Human–AI Interaction. PhilPapers. Retrieved from https://philpapers.org/rec/MOCCIR.

[5] Jaakkola, E. (2020). Designing conceptual articles: four approaches. AMS Review, 10(1-2), 18-26.

[6] Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.

[7] Popay, J., et al. (2006). Guidance on the conduct of narrative synthesis in systematic reviews. ESRC Methods Programme, 15(1), 47-71.

[8] Williamson, T. (2020). Philosophical Method: A Very Short Introduction. Oxford University Press.

[9] Whetten, D. A. (1989). What constitutes a theoretical contribution? Academy of Management Review, 14(4), 490-495.

[10] Smirnov, A., & Ponomarev, A. (2023). Collaborative Decision Support with Ontology-Based Neuro-Symbolic Artificial Intelligence: Challenges and Conceptual Model. DOI: 10.1007/978-3-031-19620-1_6. Retrieved from https://scispace.com/papers/collaborative-decision-support-with-ontology-based-neuro-16zc7mby.

[11] Smirnov, A., & Ponomarev, A. (2023). Ontology-Based Explanations of Neural Networks for Collaborative Human-AI Decision Support Systems. DOI: 10.1007/978-3-031-43789-2_33. Retrieved from https://scispace.com/papers/ontology-based-explanations-of-neural-networks-for-2p0fqcit67.

[12] Fabri, L., Häckel, B., & Oberländer, A. M. (2023). Disentangling Human-AI Hybrids. Business & Information Systems Engineering. DOI: 10.1007/s12599-023-00810-1. Retrieved from https://scispace.com/papers/disentangling-human-ai-hybrids-1z7exvwo.

[13] Patil, S. R., Sharma, S., & Mahdavi, M. (2025). From Loop to Partnership: A Framework for Understanding the Evolving Paradigms of Human-AI Collaboration. DOI: 10.22541/au.175647875.57157328/v1. Retrieved from https://scispace.com/papers/from-loop-to-partnership-a-framework-for-understanding-the-9earzrva1o16.

[14] Seeber, I., et al. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174.

[15] Bansal, G., et al. (2021). Does the whole exceed its parts? The effect of AI explanations on complementary team performance. CHI Conference on Human Factors in Computing Systems, 1-16.

[16] Dellermann, D., et al. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637-643.

[17] Adhnouss, F., El-Asfour, H., & McIsaac, K. (2023). A Hybrid Approach to Representing Shared Conceptualization in Decentralized AI Systems: Integrating Epistemology, Ontology, and Epistemic Logic. AppliedMath. DOI: 10.3390/appliedmath3030032. Retrieved from https://scispace.com/papers/a-hybrid-approach-to-representing-shared-conceptualization-2bj5bubne1.

[18] Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33(1), 126-148.

[19] Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.

[20] Younas, A. (2024). A Philosophical Inquiry into AI-Inclusive Epistemology. Social Science Research Network. DOI: 10.2139/ssrn.4822881. Retrieved from https://scispace.com/papers/a-philosophical-inquiry-into-ai-inclusive-epistemology-5319yp4afn.

[21] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.

[22] Wilder, B., Horvitz, E., & Kamar, E. (2021). Learning to complement humans. International Joint Conference on Artificial Intelligence, 1526-1533.

[23] Lai, V., & Tan, C. (2019). On human predictions with explanations and predictions of machine learning models: A case study on deception detection. ACM Conference on Fairness, Accountability, and Transparency (FAT*), 29-38.

[24] Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.

[25] Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051-2068.

[26] Ntoutsi, E., et al. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356.

[27] Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121-127.

[28] Abbass, H. A. (2019). Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cognitive Computation, 11(2), 159-171.

[29] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

[30] Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192-210.

[31] Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4), 9.

[32] Amershi, S., et al. (2019). Guidelines for human-AI interaction. CHI Conference on Human Factors in Computing Systems, 1-13.

[33] Mingers, J., & Willcocks, L. (2017). An integrative semiotic methodology for IS research. Information and Organization, 27(1), 17-36.

[34] Design Principles for Human-AI Collaborative Knowledge Service Systems. DOI: 10.1007/978-3-031-95901-1_1. Retrieved from https://link.springer.com/chapter/10.1007/978-3-031-95901-1_1.

[35] Kahr, P., et al. (2023). Transparent machine learning in hospitality: Designing explanation types for hotel revenue management forecasting. International Journal of Hospitality Management, 108, 103357.

[36] Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.

[37] Burton, J. W., et al. (2020). How to integrate artificial intelligence into the classroom. Nature Machine Intelligence, 2(11), 631-636.

[38] Smirnov, A., & Ponomarev, A. (2022). Collaborative Decision Support with Ontology-Based Neuro-Symbolic Artificial Intelligence. DOI: 10.1007/978-3-031-19620-1_6.

[39] Cath, C. (2018). Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A, 376(2133), 20180080.

[40] Grover, V., & Lyytinen, K. (2015). New state of play in information systems research: The push to the edges. MIS Quarterly, 39(2), 271-296.

[41] Faraj, S., et al. (2018). Working on and with algorithmic systems. Information and Organization, 28(1), 62-70.

[42] Rai, A., Constantinides, P., & Sarker, S. (2019). Editor's comments: Next-generation digital platforms. MIS Quarterly, 43(1), iii-ix.

[43] Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329-335.

[44] Cummings, M. L. (2006). Integrating ethics in design through the value-sensitive design approach. Science and Engineering Ethics, 12(4), 701-715.

Published

2025-12-30

How to Cite

A Conceptual Framework for Human AI Collaboration: Ontological and Epistemological Perspectives. (2025). ENERGY: JURNAL ILMIAH ILMU-ILMU TEKNIK, 345-361. https://doi.org/10.51747/energy.si2025.251