AI Task Force
Task Force Reports
Below you can view the final comprehensive report from the AI Task Force, as well as the final reports from each task force subcommittee.
Fresno State AI Task Force (AY 2023-2024) Final Report
Summary
The final reports provide a set of recommendations and actions for integrating AI across Fresno State. While there are variations in focus, the overarching themes of ethical guidelines, comprehensive training, robust infrastructure, and policy updates are consistent across committee recommendations. Implementing these recommendations will require coordinated efforts across various divisions, along with continuous review and updating in response to the rapidly evolving AI landscape.
The steps outlined in these reports provide a framework for the immediate, short-term, and long-term integration of AI into Fresno State’s operations. Immediate actions focus on policy updates, training, and establishing foundational guidelines. Short-term goals emphasize the development of comprehensive AI policies, enhanced training programs, and partnerships. Long-term strategies involve continuous policy reviews, advanced infrastructure development, and fostering interdisciplinary and community partnerships to sustain ethical AI use and innovation.
- Similarities:
- Policy and Ethical Guidelines: Almost all committees emphasize the need for comprehensive AI policies and ethical guidelines.
- Training and Development: There is a unanimous recommendation for extensive training programs tailored to various university constituents.
- Infrastructure and Resources: Several reports highlight the necessity for a robust infrastructure to support AI integration.
- Differences:
- Focus Areas: Each committee focuses on different aspects of AI integration. For instance, the Workforce Development Committee emphasizes job market readiness, while the Academic Research & Innovation Committee focuses on research applications.
- Specificity of Recommendations: Some reports, such as the Work Integrity committee's ethical guidelines and training plan, provide detailed implementation steps, while others, like the University & Academic Policy Subcommittee, offer broader policy suggestions.
Key Recommendations
- Ethical and Policy Frameworks:
- Development and regular updates of AI ethical guidelines (Workforce Development, Academic Research & Innovation).
- Comprehensive AI policies for academic integrity and syllabus integration (Teaching & Learning, University & Academic Policy).
- Training and Development:
- Extensive training programs for faculty, staff, administrators, and students on AI tools and ethical use (Teaching & Learning, Work Integrity).
- Role-specific AI training to ensure relevance and applicability (Work Integrity).
- Infrastructure and Research Support:
- Investment in AI infrastructure and resources to support academic and research activities (Academic Research & Innovation).
- Establishment of AI research grant programs to promote innovation (Academic Research & Innovation).
- Privacy and Legal Considerations:
- Clear guidelines for data privacy and legal compliance in AI applications (Security, Classified Data, Legal Implications).
- Establishment of centralized legal counsel to address AI-related issues (Security, Classified Data, Legal Implications).
Recommended Actions
Immediate Term (Next 6 Months)
Common Themes:
- Policy and Guideline Development: Several committees emphasize the urgent need to develop or update policies and ethical guidelines concerning AI (Teaching & Learning, University & Academic Policy, Workforce Development, Security, Classified Data, Legal Implications).
- Formal AI Initiative: Multiple reports recommend a formal AI Initiative with dedicated AI committees to oversee various aspects of AI integration, including ethical use, policy implementation, and research support (Workforce Development, Academic Research & Innovation).
- Training Initiatives: Immediate initiation of training programs for faculty, staff, and students to raise awareness and understanding of AI (Teaching & Learning, Academic Research & Innovation).
Unique Aspects:
- Workforce Preparedness: The Workforce Development committee focused on identifying key stakeholders and partners for AI workforce development, a step not explicitly mentioned by other committees.
- Legal Framework: The Security, Classified Data, and Legal Implications committee emphasized establishing legal counsel and inventorying data-generating areas, highlighting the legal and privacy concerns specific to AI.
Short Term (Next 1-3 Years)
Common Themes:
- Comprehensive Policy Formation: Developing and implementing broad AI policies that include annual reviews (University & Academic Policy, Security, Classified Data, Legal Implications).
- Expanded Training and Development: Specific AI tools training and holding conferences to keep faculty and staff updated (Teaching & Learning, Work Integrity Training).
- Research and Innovation Support: Establishing AI research grant programs and partnerships with industry leaders (Academic Research & Innovation).
Unique Aspects:
- Job Market Analysis: The Workforce Development committee included analyzing AI job market demands and evaluating student job preparation, which is a unique focus compared to other committees.
- Infrastructure Building: The Work Integrity committee specified building a Canvas training experience and integrating it into onboarding, showing a detailed step-by-step approach to integrating AI into day-to-day work.
Long Term (3+ Years)
Common Themes:
- Continuous Policy Review: Ongoing review and update of AI-related policies to adapt to evolving technology (University & Academic Policy, Workforce Development).
- Enhanced Infrastructure and Resources: Developing robust infrastructure to support advanced AI applications and integrating AI into the university’s strategic plan (Academic Research & Innovation).
- Strengthened Partnerships: Fostering interdisciplinary collaborations and community engagement for sustained AI innovation and workforce development (Academic Research & Innovation, Workforce Development).
Unique Aspects:
- Centralized AI Tools: The Teaching & Learning committee’s unique recommendation included implementing centralized Canvas AI integration tools and exploring AI-supported scheduling and degree options.
- Enhanced Cybersecurity: The Security, Classified Data, and Legal Implications committee focused on centralizing AI tools through Technology Services and enhancing cybersecurity measures, a focus on data integrity and protection not emphasized by other committees.
Highlights
- Focus Areas:
- Training and Policy: There is a consistent focus on immediate policy updates and training across all reports. The emphasis on developing ethical guidelines and policies in the immediate term reflects a common understanding of the foundational requirements for AI integration.
- Research and Innovation: Short-term and long-term steps highlight the importance of fostering research and innovation through grants and partnerships, ensuring that AI integration aligns with academic goals.
- Infrastructure and Tools: Long-term recommendations stress the need for robust infrastructure and centralized tools to support AI initiatives, showing a strategic approach to sustain AI use.
- Unique Contributions:
- Workforce Development: The unique focus on AI job market analysis and workforce preparedness highlighted the broader societal implications of AI, preparing students for future job markets.
- Legal and Privacy Concerns: The Security, Classified Data, and Legal Implications committee’s focus on legal frameworks and cybersecurity measures ensured that AI integration respects privacy and complies with legal standards, addressing potential risks.
Recommendations from Each Committee
Academic Research & Innovation Committee
Key Recommendations:
- AI Definitions: Establish clear definitions and applications of AI in academia.
- Ethical Use: Provide guidance on ethical AI use in research.
- Infrastructure: Ensure robust infrastructure to support AI integration.
- Research Grants: Create AI-focused research grant programs.
Unique Aspects:
- Comprehensive approach to integrating AI into academic research and innovation.
- Specific focus on research infrastructure and grant programs.
Security, Classified Data, Legal Implications Committee
Key Recommendations:
- Data Privacy: Establish clear guidelines for data privacy in AI applications.
- Legal Counsel: Develop legal counsel for AI-related issues.
- Policy Revisions: Regularly update AI policies to align with evolving legal standards.
Unique Aspects:
- Strong focus on legal and privacy aspects of AI use.
- Recommendations for creating a centralized AI policy framework.
Teaching & Learning Committee
Key Recommendations:
- Training: Extensive training for students, staff, and faculty on AI tools and their classroom applications.
- Policy Updates: Revision of academic and plagiarism policies to include AI considerations.
- Equity Issues: Address the disparity between free and paid AI tools.
- Implementation Phases: Immediate policy updates and training, medium-term AI-specific training and conferences, and long-term AI integration in course structures and scheduling.
Unique Aspects:
- Strong emphasis on training and development at all levels.
- Focus on immediate policy updates and equity issues in AI access.
University & Academic Policy Committee
Key Recommendations:
- Broad AI Policy: Development of comprehensive AI policies applicable to students, staff, and faculty.
- Syllabus Integration: Clear AI guidelines in course syllabi and academic integrity codes.
- Policy Reviews: Regular reviews of recruitment, retention, and faculty advancement policies to incorporate AI considerations.
Unique Aspects:
- Detailed focus on integrating AI policies across various academic and administrative documents.
- Emphasis on AI’s role in recruitment and retention strategies.
Work Integrity Committee
Key Recommendations:
- Training Modules: Comprehensive training on AI ethics, bias, and workplace applications.
- Interactive Learning: Use of interactive tools and quizzes to reinforce learning.
- Role-Specific Content: Tailored AI training for different university roles.
Unique Aspects:
- Detailed and structured training modules.
- Use of interactive content to enhance engagement and understanding.
Workforce Development Committee
Key Recommendations:
- Workforce Preparedness: Equip students with AI skills for the job market.
- Ethical AI Use: Develop guidelines for ethical AI practices.
- Partnerships: Strengthen industry partnerships to facilitate AI workforce development.
Unique Aspects:
- Focus on preparing the workforce for AI-related jobs.
- Addressing regional employment concerns related to AI.
AI Disclosure Statement
During the preparation of this work, the authors used ChatGPT, Grammarly, Microsoft Copilot, and Google Gemini in the writing process. After using these tools/services, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Subcommittee Reports
Select a subcommittee below to review the respective final reports.
The Academic Research and Innovation subcommittee is charged to explore AI in research, scholarship, creative activities, and innovation.
Critical issues identified by the committee
- We must precisely define AI and its applications in academia, research, and innovation.
- Recent advancements in AI have shown significant potential to impact various societal sectors, including education. To effectively integrate this technology at Fresno State, we need to consider how it aligns with our educational mission and ethical values. While this working group is a step forward, a long-term strategy is necessary to keep pace with AI's evolving landscape.
- Addressing potential employment impacts requires ensuring fair treatment and protection for all employees.
- Faculty and students need clear guidance on ethically using AI tools to harness their productivity while upholding integrity and preserving independent thought. Providing faculty with access to curated tools and training can aid in successful technology adoption. Fresno State's infrastructure must be equipped to support these endeavors.
- Faculty and students require knowledge of how AI tools generate outputs, specifically how human-generated labels in training data may introduce bias into those outputs. This information will help Fresno State’s community understand why caution must be exercised in the use of generative AI tools and will also improve data literacy in today’s data-driven world.
- Fresno State faculty and student researchers who develop their own machine-learning models and use existing AI tools must learn about their own roles in introducing bias to models and the societal impacts their work may produce. This will ensure that Fresno State-developed tools are created with appropriately labeled data and can engender trust amongst fellow researchers and users of their outputs that they were developed ethically.
- Providing faculty with access to these tools, particularly a curated collection accompanied by training and guidance, can also facilitate successful adaptation to this new technology. It is important to ensure that Fresno State has the infrastructure to provide this service.
- Given the size and influence of the CSU system of which Fresno State is a part, understanding how this larger context can facilitate adaptation to this new technology is crucial to providing the support needed to achieve the best possible outcome for everyone.
- Establish a research grant program specifically focused on AI-related projects, providing funding for faculty-led research initiatives, interdisciplinary collaborations, and student-faculty partnerships to advance AI knowledge and innovation.
- Create an innovation hub or center for AI research and development on campus, equipped with state-of-the-art facilities, resources, and expertise to support the prototyping, testing, and commercialization of AI-driven technologies and solutions.
Recommended steps to address critical issues:
Immediate term (next 6 months)
- Establish a task force or committee dedicated to defining AI and its applications within academia, research, and innovation at Fresno State.
- Conduct workshops and training sessions for faculty and staff to raise awareness of AI's potential impact on various societal segments, including education, and to discuss ethical considerations.
- Initiate discussions with relevant stakeholders, including faculty, students, and administration, to gather input on incorporating AI technology in alignment with the university's moral and ethical ideals.
- Assess the current infrastructure and resources available at Fresno State to support the integration of AI technology and identify any gaps or areas for improvement.
- Form a task force or committee composed of faculty, administrators, and industry experts to develop a framework for the AI research grant program, outlining eligibility criteria, funding mechanisms, and evaluation processes.
- Conduct a needs assessment and feasibility study to identify potential locations, resources, and infrastructure requirements for establishing the AI innovation hub or center on campus.
- Establish a citation practice that is promoted as a campus standard.
- Develop a task force or committee for interdisciplinary research and teaching approaches.
Short term (next 1-3 years)
- Develop comprehensive guidelines and policies for the ethical use of AI technology within the university community, addressing issues such as data privacy, bias mitigation, and transparency.
- Invest in faculty development programs focused on AI education and research, providing resources for curriculum development, research projects, and interdisciplinary collaborations.
- Establish partnerships with industry leaders, research institutions, and other universities to stay abreast of AI advancements and potential applications in academia.
- Expand the availability of AI tools and resources on campus, including access to curated collections, training programs, and technical support services for faculty, students, and researchers.
- Launch the AI research grant program, soliciting proposals from faculty members across departments and disciplines, and awarding funding to selected projects based on their potential for advancing AI knowledge and innovation.
- Secure funding and resources to establish the AI innovation hub or center, including partnerships with industry sponsors, grants, and the allocation of university resources for infrastructure development.
- Develop a curriculum covering how AI works, with learning objectives of improving AI and data literacy, for use by both faculty and students.
Long term (3+ years)
- Integrate AI education and research into the university's strategic plan, incorporating AI-related initiatives into academic programs, research centers, and institutional priorities.
- Enhance infrastructure and technology resources to support advanced AI applications, such as high-performance computing clusters, data analytics platforms, and AI-driven research facilities.
- Foster interdisciplinary collaborations and cross-departmental partnerships to leverage AI expertise and resources across different fields of study.
- Continuously monitor and evaluate the impact of AI integration at Fresno State, soliciting feedback from stakeholders and adjusting strategies as needed to ensure alignment with the university's mission and values.
- Expand the AI research grant program to include more funding opportunities, such as seed grants, collaborative grants, and large-scale grants, to support a broader range of AI-related projects and initiatives.
- Fully operationalize the AI innovation hub or center, providing state-of-the-art facilities, equipment, and expertise to support the prototyping, testing, and commercialization of AI-driven technologies and solutions. Additionally, establish partnerships with external organizations and industry leaders to foster collaboration and support ongoing innovation in AI research and development.
- Implement AI tools at Fresno State to make it easier for students and faculty to find university information about processes and resources.
- Develop a long-term plan for AI talent development at Fresno State, including initiatives to attract, train, and retain top AI researchers, educators, and students.
The Security, Classified Data, and Legal Implications subcommittee is charged to explore the impact of AI on security and privacy of employees and stakeholders.
Executive Summary
The Security, Classified Data, and Legal Implications subcommittee of the AI Task Force at Fresno State presents its final report, encompassing a thorough analysis of the current AI policy landscape, data handling practices, and the policy revisions required for effective AI implementation. The committee reviewed existing bills and policies, including the White House AI Blueprint, and identified key issues such as privacy concerns, outdated policies, and legal uncertainties surrounding AI adoption. Recommendations include immediate actions such as establishing legal counsel and conducting a data inventory, short-term goals of developing campus-wide AI policies, and long-term strategies for centralizing AI tools and enhancing cybersecurity. The report underscores the need for collaboration with campus leadership and legal experts to ensure compliance and responsible AI integration. It also suggests continuing Task Force activities and exploring legal implications to address emerging challenges effectively.
Critical issues identified by the committee
- Privacy and user data continue to be a concern when implementing AI tools.
- Across all campus groups surveyed (faculty, staff, MPPs, and students), a large majority agreed that they were concerned with the privacy and transparency of AI algorithms.
- Our article research found that technology vendors have difficulty determining what is in the code of different generative AI programs, because the code is not shared.
- The AI Legal Landscape is changing rapidly and its effects on higher education operations are untested.
- The Biden administration has shared a blueprint of policies and procedures for AI adoption in organizations.
- AI use in admissions offices is increasing, raising concerns about gamification of the process and increased discrimination against minority students.
- A lack of clear data pathways and data handling practices at Fresno State.
- Some areas that handle large amounts of sensitive data:
- Admissions, Recruitment, and Enrollment Management
- Financial Aid Services
- Student Health and Counseling Center
- Vague and potentially outdated policies involving data handling and classification as they pertain to AI adoption.
- Tech Services policies that were enacted before 2010.
- Need to review Chancellor’s Office technology policies.
- Proper guidance from experts regarding legal implications is required.
Recommended steps to address critical issues
Immediate term (next 6 months)
- Establish legal counsel to understand the legal ramifications of AI adoption on campus.
- Conduct a full inventory of data-generating areas on campus that handle and use sensitive user information.
- Provide a temporary guideline and hold discussions with campus about handling sensitive data based on current policies.
Short term (next 1-3 years)
- Create a campus-wide AI policy that considers multiple campus groups as well as Technology Services, accounts for data privacy and security, and emphasizes transparency.
- Establish clear legal policies that follow applicable laws and guidelines regarding AI use in higher education.
Long term (3+ years)
- Centralize and integrate AI tools through Technology Services, possibly including in-house programs that limit the data shared with external vendors.
- Enhance cybersecurity.
- Use AI consistently for testing and validating data to ensure data integrity.
The Teaching and Learning subcommittee is charged to explore the use of AI in educational settings throughout the university.
Critical issues identified by the committee
- Training: Students, staff, and faculty need training opportunities in using and referencing AI for teaching and learning, including opportunities to learn generative AI tools (text, image, audio, code, etc.) and their use for classroom purposes.
- Policy: Syllabus template language for AI usage and updates to academic and plagiarism policies to include AI considerations (with potential for fair and transparent department- and college-level policies).
- Equity issues: Many AI tools have paid options; the distinction between free and paid versions affects student usage and access.
Recommendations to address critical issues
Immediate term (next 6 months)
- Need for an update of academic and plagiarism policy that considers AI teaching and learning tools.
- More than 70% of faculty are interested in formal training (AI Survey). A CSU-wide course on AI teaching and learning tools is already available for faculty; it covers general AI applications rather than specific disciplines. The IDEAS Center can direct interested faculty to additional courses external to Fresno State.
- Student AI training, as an extension of library training on referencing.
- An ongoing task force with faculty, staff, and student representatives that monitors and makes recommendations on new AI teaching and learning tools.
Short term (next 1-3 years)
- AI tools specific training (faculty, staff, and students) - text, image, code, audio, etc. generative AI.
- Annual or biannual conference/session with an AI focus (part of TIP conference; not necessarily a central focus).
Long term (3+ years)
- Potential of a centralized Canvas AI integration tool (Enterprise Option).
- AI for supporting scheduling.
- Degree options for AI - particular AI courses.
The University and Academic Policies subcommittee is charged to explore potential impacts to university and academic policies by the integration of AI throughout the organization.
Critical issues identified by the committee
- Need a broad AI policy covering a variety of uses and applying to students, staff, and faculty.
- This can be achieved through either an APM amendment (via the Academic Senate) or a University Policy (via Presidential Order).
- For academic policies, there is an urgent need to include clear instructions about AI language in the course syllabus. Work with the Office of IDEAS to create an AI syllabus template. Important academic policies that may be impacted:
- APM 235 - POLICY AND PROCEDURES ON CHEATING & PLAGIARISM
- Update the Examples sections to include AI examples
- APM 236 - CODE OF ACADEMIC INTEGRITY HONOR CODE
- APM 216 - STANDARDS FOR WRITING COURSE REQUIREMENTS
- Explore the possibility of using AI in advising; the advising policies need to be reviewed.
- AI can be used effectively in recruitment and retention policies for:
- Students
- Faculty
- Staff
Recommendations to address critical issues
Immediate term (next 6 months)
- Focus on APM section 200: Include appropriate language about AI - Take inputs from other subcommittees
- Review recruitment and retention policies:
- Review policies related to student recruitment and retention. Work with OIE to create a data-driven approach to shaping policies on recruitment and retention (APM sections 200 and 400).
- Review policies related to recruitment, retention, and advancement of faculty. This will require working with HR and the Office of Faculty Affairs (impacted APM sections: 200, 300, 400, 500, and 700). Work with other subcommittees to modify policies.
- Review policies related to recruitment and retention of non-academic staff (possibly impacted APM sections: 300, 600, and 700).
Short term (next 1-3 years)
- Creation of a broad AI policy/policies for academic and university purposes - Annual review mechanism as the field evolves
- Suggest and recommend changes to Section 300 (Personnel), Section 400 (Student Affairs), and Section 500 (Research Compliance).
Long term (3+ years)
- Continuous review of policies that may be impacted by AI
The Work Integrity subcommittee is charged to explore the use of AI in job tasks at every level of the university.
Recommended Ethical Guidelines
As employees of a state institution of higher education, it is important to understand and evaluate the use of generative artificial intelligence (A.I.) tools in our everyday working environment. It is imperative that employees use a critical and ethical lens while interacting with A.I. to ensure our institution's standards of diversity, equity and inclusion are upheld. It is also important to recognize that A.I. is often owned by corporations which may use prompts, or other input data, for other purposes not fully known or understood by employees. Therefore, it is imperative that protected data not be used when interacting with A.I. unless specifically authorized.
Our underlying principle for the recommended ethical guidelines for work integrity is that a human must be in the loop and be responsible for the work generated. A.I. can be inaccurate, can hallucinate, can produce unethical output, and can reflect the biases embedded in the materials it was trained on. Employees must be mindful of these and other potential limitations and take agency and responsibility for the work they produce.
Limitations: It is important to note the following recommended ethical guidelines are intended for the management, faculty, staff, and students who are employed by the institution and using A.I. as part of their job function. Additional guidelines will be necessary to address academic issues such as cheating and plagiarism in the educational setting or A.I. development and programming issues. It is also important to note that these recommended guidelines are a starting point and may be expanded upon or changed to fit other assumptions or perspectives.
- A human must be in the loop: A.I. should not be solely used for critical decisions about admissions, enrollment, hiring, or discipline. A human must make the final decision.
- Accountability: Employees are held accountable for their work and decision-making, whether A.I. is used or not.
- Diversity, Equity, and Inclusion: Because many A.I. systems use human-generated data, the A.I. may reflect the unconscious or conscious diversity, equity and inclusion biases embedded in society. Employees must be vigilant in identifying and correcting these biases when using A.I. systems. Systems that repeatedly produce biased results should be reported to the A.I. Task Force for further evaluation and may be recommended for removal campus-wide.
- Transparency: A.I. use must be transparent. Stakeholders must be informed about the types of A.I. being utilized, the purpose for which they are used, and the data that is being collected. The establishment or maintenance of an A.I.-based secret profiling system for any purpose is prohibited.
- Data Privacy: Employees must adhere strictly to existing data privacy and security laws to ensure all personal data is collected, stored, and used according to university policies. Level 1 (Confidential) Data and Level 2 (Business/Internal Use) Data of the Protected Data Storage Guidelines should not be used with A.I. tools unless specifically cleared by Technology Services.
- Consent: Where applicable, obtain explicit consent from individuals before collecting and using their data for A.I. purposes.
- Plagiarism, Copyright and Trademarks: A.I. users should be aware the output may plagiarize the works of others or include copyrighted or trademarked material without permission. Being responsible for their work, employees should take reasonable steps to ensure their work is free from copyright or trademark infringement and plagiarism.
- Judgment: Employees must understand that A.I. can provide incorrect or biased information. Employees must validate and evaluate the accuracy of A.I. by verifying against other sources.
Recommendations to address critical issues
Immediate term (next 6 months)
- Draft a set of A.I. Ethical Guidelines accessible to all campus stakeholders for review and comment. (See the recommended guidelines above.)
- It is recommended that an A.I. Task Force be formed to ensure the ethical use of A.I. within Fresno State. The committee will consist of student leaders, campus IT, faculty, DEI, staff, and administrators in consultation with an expert ethicist. The task force’s responsibility will be to oversee the implementation and use of A.I. systems and technologies within the university.
- The A.I. Task Force will review A.I. use cases, policies, and practices regularly to ensure they comply with ethical standards. Additionally, the task force will work with campus administration to provide training to all university community members on the ethical use of A.I. This includes recognition of biases and understanding of privacy implications.
- The task force will also conduct awareness campaigns to keep the university community and other stakeholders informed about A.I. use and its impact. These guidelines are designed to create an environment of trust and respect, ensuring that A.I. technologies are used in a way that benefits the university community while upholding the highest ethical standards.
- Overall, forming an A.I. Task Force is a crucial step toward ensuring the safe and fair use of A.I. within Fresno State.
Short term (next 1-3 years)
- Finalize the campus A.I. Ethical Guidelines for implementation.
- A.I. Ethical Guidelines should be included in any A.I. training.
Long term (3+ years)
- The A.I. Ethical Guidelines should be reviewed at least every three years and updated as needed to remain relevant as technology evolves and progresses.
The Workforce Development subcommittee is charged to explore the role of the university in creating programs to prepare faculty, staff, and students for the future of work leveraging AI.
Critical issues identified by the committee
Workforce Preparedness and Education
The committee discussed the role of universities in preparing faculty, staff, and students for AI-related work. Committee members identified the need to equip graduates with the necessary skills and tools to thrive in an AI-driven job market. The discussion also touched on the importance of promoting cross-disciplinary and integrative use of AI, particularly in areas like written communication. Additionally, concerns were raised about the current shortage of AI employers in the Central Valley and how this impacts students seeking employment. The impact of AI on job markets for students, including the types of jobs that will be created and those that may disappear, was a key point of interest.
Challenges and Concerns in AI Adoption
The committee discussed the challenges and concerns associated with the adoption of AI in workforce development. Some committee members recognized the fast-paced nature of AI and the urgency to establish frameworks for its effective use. There was also discussion about identifying the point at which AI becomes a hindrance rather than an asset, and how the widespread use of AI can influence judgment. The meeting explored the importance of ensuring equality in AI usage that can help students (particularly students from underrepresented groups) in resume development and cover letter preparation.
Ethical and Responsible AI Use
The committee also discussed ethical and responsible AI use. Committee members sought to define integrity within the context of AI and to explore what constitutes ethical and unethical AI practices. Questions arose about when and how to disclose the use of AI in writing. Discussions also centered on ethical considerations in various fields, such as criminal justice, law enforcement, and government work. The committee highlighted the potential biases that AI systems can reflect, particularly in health records and law enforcement. Additionally, researcher ethics and the standards related to HIPAA, FERPA, and patents were discussed.
Partnerships and Collaboration
The importance of strengthening partnerships between educational institutions and industry was emphasized during the meeting. Committee members recognized the need to connect with other subcommittees within the task force (e.g., Work Integrity, Security), to facilitate the exchange of relevant information. Collaboration with other groups was considered crucial to address the multifaceted aspects of AI. The meeting also discussed the development of a guidebook for students to aid them in selecting and using AI tools appropriately, including field-specific considerations.
AI Design and Bias
The final area of discussion centered around AI design and bias within the context of workforce development. Committee members explored how AI programs are designed and trained by humans and the implications for workforce training and development. The meeting highlighted the potential for implicit biases in AI systems and discussed their impact on workforce training programs. Participants also raised questions about ensuring that AI operates free from bias and discussed strategies to achieve unbiased and ethical AI use in workforce development.
Recommendations to address critical issues
Immediate term (next 6 months)
- Understand AI workforce development preparation and education processes.
- Identify key stakeholders and major partners for local-area AI workforce development.
Short term (next 1-3 years)
- Identify the number and types of AI jobs available and their labor market demand.
- Analyze and evaluate student education/job preparation to secure employment in AI.
- Begin the conversation on the ways in which university faculty are involved in and support student AI and workforce development/preparation.
Long term (3+ years)
- Discuss strategies to enhance student, faculty, and staff AI preparation: education, training, and workforce development.
- Enhance community engagement and partnerships in AI preparation and workforce development.