Papers by Srinivasa Rao Kolusu

REST Publisher, 2023
Explainability in Artificial Intelligence (AI) is the ability to comprehend and explain how AI models generate judgments or predictions. As the complexity of AI systems, especially machine learning models, increases, understanding their reasoning process becomes crucial for ensuring trust, fairness, and accountability. Explainable AI (XAI) helps demystify the "black box" character of sophisticated models such as deep neural networks, allowing users to grasp how inputs are transformed into outputs. In many industries, including healthcare, banking, and law, AI system judgments can have a major impact, making transparency a necessity. Explainability also aids in identifying and mitigating biases, improving model performance, and complying with regulatory requirements. As AI technologies evolve, there is an increasing emphasis on balancing model accuracy with interpretability, so that AI systems remain ethical, transparent, and in line with human values. In AI research, explainability is essential for fostering confidence, guaranteeing responsibility, and enhancing the openness of AI systems. As AI models, especially intricate ones such as deep learning models, become more widely adopted, understanding their decision-making processes is crucial for validating their outcomes. The goal of XAI research is to make models interpretable so that users can comprehend the decision-making process. This is particularly crucial in high-stakes industries such as healthcare, banking, and law, where poor or prejudiced choices can have serious repercussions. Explainability also supports regulatory compliance, model improvement, and ethical AI deployment. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is a decision-making approach that evaluates how far an alternative is from the worst-case solution and how close it is to the ideal solution. The worst-case solution takes the lowest values, while the ideal solution takes the best values for the chosen criteria. TOPSIS assigns each alternative a similarity score and ranks the alternatives by how near they are to the ideal solution. The method is widely used to improve decision-making in domains including business, engineering, environmental research, and healthcare. Alternatives: LIME (Local Interpretable Model-agnostic Explanations), SHAP (Shapley Additive Explanations), DeepLIFT (Deep Learning Important Features), Anchor Explanations, ICE (Individual Conditional Expectation), Counterfactual Explanations, Rule-based Explanation Systems, Saliency Maps (for CNNs), Integrated Gradients, and XAI for Healthcare. Evaluation criteria: Interpretability, Accuracy of Explanations, User Trust, Computational Complexity, Scalability, and Flexibility. The results indicate that XAI for Healthcare ranks highest, while Saliency Maps (for CNNs) holds the lowest rank.
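The TOPSIS procedure described in the abstract can be sketched in a few lines of Python. The decision matrix, weights, and criterion types below are invented placeholders for illustration, not the scores used in the study:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS (illustrative sketch).

    matrix  : one row per alternative, one score per criterion
    weights : criterion weights (should sum to 1)
    benefit : True for benefit criteria (higher is better), False for cost
    """
    n_crit = len(weights)
    # 1. Vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    V = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in matrix]
    # 2. Ideal and worst-case values per criterion
    cols = list(zip(*V))
    ideal = [max(c) if b else min(c) for c, b in zip(cols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(cols, benefit)]
    # 3. Euclidean distance of each alternative to the ideal and worst solutions
    d_pos = [math.dist(row, ideal) for row in V]
    d_neg = [math.dist(row, worst) for row in V]
    # 4. Closeness coefficient: 1.0 = ideal solution, 0.0 = worst case
    return [dn / (dp + dn) for dp, dn in zip(d_pos, d_neg)]

# Hypothetical scores for three XAI methods on interpretability (benefit),
# user trust (benefit), and computational complexity (cost) -- NOT study data.
scores = topsis([[9, 8, 2], [7, 6, 4], [3, 4, 9]],
                weights=[1 / 3, 1 / 3, 1 / 3],
                benefit=[True, True, False])
```

Alternatives are then ranked in descending order of their closeness coefficient, which is how a method such as XAI for Healthcare ends up first and Saliency Maps last in the study's full ten-by-six matrix.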

REST Publisher, 2024
Artificial Intelligence (AI) has become a transformative force in computer science, revolutionizing
technological advancement across multiple domains. This research explores the multifaceted applications of AI-
powered solutions, employing the Complex Proportional Assessment (COPRAS) technique to comprehensively
evaluate and prioritize innovative tools in computer science. The study analyzes five AI-powered solutions:
Automated Code Review Tool, Intelligent Bug Tracking System, AI-Based Software Testing Framework, Predictive
Maintenance for Data Centers, and NLP-Powered Chatbot for IT Support. These solutions are rigorously assessed
across four critical metrics: accuracy, efficiency, innovation, and resource usage. Through systematic multi-
criteria decision-making, the research reveals significant insights into AI's potential for solving complex problems,
enhancing operational efficiency, and enabling intelligent automation. The COPRAS methodology provides a
structured framework for comparing and ranking these technologies, highlighting their unique strengths and trade-
offs. Key findings demonstrate the NLP-Powered Chatbot for IT Support as the top-performing solution, achieving
the highest utility score and ranking first across evaluated metrics. The Automated Code Review Tool closely
followed, showcasing balanced performance and cost-effectiveness. The research underscores AI's transformative
potential in domains such as machine learning, natural language processing, computer vision, cybersecurity, and
software development. Moreover, it emphasizes the critical significance of ethical considerations and responsible AI
development to guarantee fair and trustworthy technological solutions. By providing a comprehensive evaluation
approach, this study offers valuable insights for stakeholders seeking to leverage AI technologies, guiding strategic
decision-making in an increasingly complex technological landscape.
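As a sketch of how the COPRAS technique ranks such alternatives, the following Python function implements the standard steps (column-sum normalization, weighted benefit and cost sums, relative significance, and utility degree). The example scores for the alternatives are invented for illustration and are not the study's data:

```python
def copras(matrix, weights, benefit):
    """Rank alternatives with COPRAS (illustrative sketch).

    matrix  : one row per alternative, one score per criterion
    weights : criterion weights (should sum to 1)
    benefit : True for benefit criteria (higher is better), False for cost
    """
    n_crit = len(weights)
    # 1. Normalize each criterion by its column sum, then apply the weights
    col_sums = [sum(row[j] for row in matrix) for j in range(n_crit)]
    D = [[w * x / s for x, w, s in zip(row, weights, col_sums)] for row in matrix]
    # 2. Sum weighted values over benefit (S+) and cost (S-) criteria
    s_plus = [sum(x for x, b in zip(row, benefit) if b) for row in D]
    s_minus = [sum(x for x, b in zip(row, benefit) if not b) for row in D]
    # 3. Relative significance: Q_i = S+_i + sum(S-) / (S-_i * sum(1 / S-_j))
    total_minus = sum(s_minus)
    inv_sum = sum(1.0 / s for s in s_minus)
    Q = [sp + total_minus / (sm * inv_sum) for sp, sm in zip(s_plus, s_minus)]
    # 4. Utility degree as a percentage of the best alternative (best = 100)
    q_max = max(Q)
    return [100.0 * q / q_max for q in Q]

# Hypothetical scores on accuracy, efficiency, innovation (benefit criteria)
# and resource usage (cost criterion) -- NOT the study's data.
utilities = copras([[9, 9, 9, 2], [6, 6, 6, 5], [3, 3, 3, 8]],
                   weights=[0.25, 0.25, 0.25, 0.25],
                   benefit=[True, True, True, False])
```

The top-ranked alternative receives a utility degree of 100%, which corresponds to the role the NLP-Powered Chatbot for IT Support plays in the study's five-alternative, four-criterion evaluation.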