In my job as a CTO, I have been navigating the expectations and fears around AI use in software development at my company. My personal opinion on AI tools, and in particular LLM-based tools, is grounded in the fact that they are tools, not magic. There is no intelligence or consciousness in LLM-based tools. However, these complex, data-trained, probability-based text generators can do impressive things, given the right training and prompting. And many of those things can be of great use in various aspects of software development.
We are a healthcare company, so we handle and manage large amounts of sensitive personal data, and there are many laws and regulations we need to comply with. We are also ISO 27001 certified and are currently working on complying with the NIS2 regulatory requirements. This setting creates a lot of challenges when considering the use of AI tools in digital and software development.
We have had many insightful and interesting discussions in my team about both the possibilities and the risks of AI tool use in our development. As in all teams, there are early adopters and late adopters, optimists and pessimists. And each individual holds a mix of those attitudes toward different aspects of AI use. We all agree that we want to avoid hype-driven use of AI just for the sake of it, but also that we want to investigate and adopt AI tools where they can genuinely help us and enhance productivity. Based on my opinions and the discussions in my team, I have written a draft for an AI policy that I want to guide our use of AI tools.
A draft of the policy
All digital and software development involves many different types of activities of very different natures. There are many different AI technologies, AI tools and AI models, each suited to different types of activities. This AI policy regulates the use of AI in digital and software development work, including UX and UI design, system design, programming, testing, operations, log analysis and problem analysis.
The principles below are primarily written with the use of AI tools and models that use Machine Learning, Deep Learning or Large Language Model technologies in mind.
- Principle 1: All use of AI tools must have a concrete, clearly described purpose and value.
- Principle 2: All decisions about the use of AI tools must be based on a formal risk assessment.
- Principle 3: All use of AI tools and LLM models should be done with self-hosted AI tools and LLM models. Use of an externally hosted AI service must be approved by the CTO.
- Principle 4: All results from the use of AI tools must be understood by the employee using the tool, who is accountable for them.
- Principle 5: Employees who use AI tools to perform any work or generate any results are always personally responsible for the results of the work or what is generated and handed over or communicated to others.
- Principle 6: No personal data or customer data may be used to train AI models or to prompt LLM models and AI tools. Use test data from our curated test data sets.
- Principle 7: No other material (files) may be used to train AI models or to prompt LLM models and AI tools, unless a formal risk assessment and decision has been made to allow it. Any such decision must be specific to a certain category of material and to designated AI tools and AI models.
- Principle 8: All use of AI tools and AI models in digital and software development for experimentation and evaluation must be approved by the CTO.