Working Well With AI

Habits and security rules for individuals and organizations.

Using AI is a cultural, behavioural, and societal shift that requires us to be intentional about when we use it, how we use it, and how much we use it. I’ve included some general usage rules: what to do and what not to do. Please note that this is advice shaped by my own practice, by the governance work I do, and by watching how AI use goes wrong in organizations and in individual research. It is written for people using AI in serious work that impacts systems and people directly or indirectly: research, writing, analysis, consulting, building. It is for anyone who wants to do impactful work without compromising the integrity of what they produce or who they are accountable to.

This is not an exhaustive list of usage rules, but the set of secure digital habits I think matter most:

Rule: Treat AI as instrumentation, not authorship
Do: Use AI to accelerate work you already understand: drafting, structural critique, synthesis across material you have read, code in languages you can debug.
Don’t: Use AI to produce work in domains where you cannot evaluate the output.
Why: If you cannot tell when the model is wrong, you are not using a tool but are deferring to a system whose reasoning you cannot inspect. Most people enter that relationship without noticing.

Rule: Disclose when AI shaped the output
Do: Tell readers, clients, and collaborators when AI materially contributed: substantive drafting, structural editing, analytical assistance, code generation.
Don’t: Disclose routine use. Spell-check, transcription, search do not need separate flagging.
Why: Undisclosed AI involvement, once discovered, retroactively damages every other piece of work you have ever published. Disclosure is a hedge against that.

Rule: Keep human judgment on anything that touches a person
Do: Route every output that affects identifiable people, communities, or vulnerable populations through direct human review before it leaves your desk.
Don’t: Rely on AI to make or recommend consequential decisions about individuals without meaningful human oversight. This includes hiring, lending, housing, eligibility, clinical triage.
Why: This habit is most likely to be eroded by speed. The faster the workflow, the more tempting it is to let the system decide. Build the friction in deliberately.

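If the workflow is scripted, the friction can be written into the code path itself. A minimal Python sketch of that idea, assuming nothing about your stack; `PendingOutput`, `requires_human_review`, and `draft_eligibility_note` are hypothetical names, not any library’s API:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PendingOutput:
    """An AI-assisted output held back until a named human signs off."""
    payload: Any
    produced_by: str
    reviewer: str | None = None

    def approve(self, reviewer: str) -> Any:
        # The payload is only released once a reviewer is on record.
        self.reviewer = reviewer
        return self.payload

def requires_human_review(fn: Callable[..., Any]) -> Callable[..., PendingOutput]:
    """Wrap any step whose output affects identifiable people."""
    def wrapper(*args: Any, **kwargs: Any) -> PendingOutput:
        # Downstream code never sees the raw result; it has to go
        # through PendingOutput.approve() first. That is the friction.
        return PendingOutput(payload=fn(*args, **kwargs), produced_by=fn.__name__)
    return wrapper

@requires_human_review
def draft_eligibility_note(case_file: dict) -> str:
    # Placeholder for whatever AI-assisted step produces the draft.
    return f"Draft note for case {case_file['id']} ..."
```
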
Rule: Verify, don’t cite
Do: Treat AI outputs as hypotheses to check against primary sources.
Don’t: Cite AI outputs as if they were findings.
Why: They are plausible reconstructions of patterns in training data, and they are sometimes wrong in ways that look exactly like being right. If there is no underlying source, the claim probably should not be in your work.

Rule: Mind the energy going out
Do: Match the tool to the task. Smaller models, local models, or no model at all are often sufficient, and sometimes better.
Don’t: Use large models for tasks a small one can do, or any model for tasks that do not need one.
Why: Compute is not a free resource. It has environmental, infrastructural, and political costs. This is the habit most people skip, and the one that compounds the most over a career.

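Where the workflow is automated, the matching can be made explicit and auditable. A sketch only; the task names and tier labels below are assumptions, not a standard:

```python
# A deliberately boring routing table: "no model" and "small local
# model" are the defaults, and the large hosted model has to be
# justified by the task. Task names and tiers are hypothetical.
ROUTING = {
    "spell_check": "none",                  # a dictionary does this
    "regex_rewrite": "none",                # deterministic tools suffice
    "summarize_own_notes": "small_local",   # runs on your machine
    "structural_critique": "small_local",
    "cross_document_synthesis": "large_hosted",
}

def choose_tier(task: str) -> str:
    # Unknown tasks fall back to the smallest tier, not the largest:
    # escalation should be a decision, not a default.
    return ROUTING.get(task, "none")

print(choose_tier("spell_check"))  # -> none
```
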
Rule: Refuse imitation of real people
Do: Use AI to write in your own voice, in invented voices, or in genre conventions.
Don’t: Use AI to imitate the voice, likeness, signature, or style of real, identifiable people without their explicit consent.
Why: Synthetic quotes, simulated interviews, and stylistic forgeries are deception. The harm is to the broader information environment, which depends on the assumption that attributed speech was actually spoken.

Rule: Refuse manipulation infrastructure
Do: Use AI for clear, persuasive communication that respects the reader’s capacity to evaluate it.
Don’t: Use AI to produce content designed to manipulate through deception. This can be synthetic media presented as authentic, astroturfed commentary, fabricated source material, or persuasion systems targeting psychological vulnerabilities.
Why: The fact that the technology makes this easy does not make it acceptable. It makes refusal more important, not less.

Rule: Sign what you produce
Do: Ask, before publishing or delivering: would I sign this without the AI?
Don’t: Let AI extend your practice into domains where you have no standing.
Why: Tools amplify competence. They do not create it. Work you would not be willing to defend on your own is work you should not be putting into the world.

Rule: Date and version your practice
Do: Write down how you use AI, date it, and revise it.
Don’t: Treat your AI practice as a private matter.
Why: Published practice is harder to erode than private practice. It gives you something to drift from rather than drifting into, and gives the people who work with you something to hold you to.

Rule: Know where your data lives
Do: Map, for every AI tool: where prompts are processed, where outputs are stored, how long they are retained, who has access, and under whose jurisdiction.
Don’t: Assume that because a tool is widely used, it is safe for your data.
Why: Widely used tools are widely used because they are convenient. Convenience and security are often in tension. If you cannot answer those questions for a tool you are already using, that is the first thing to fix.

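The map can be as lightweight as one structured record per tool, kept under version control. A minimal sketch; the field names and the example entry are mine, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row of the inventory. A field you cannot fill in is
    itself a finding."""
    name: str
    prompts_processed_in: str   # region or country, if the vendor discloses it
    outputs_stored_in: str
    retention: str              # vendor-stated retention period
    who_has_access: str
    governing_jurisdiction: str

inventory = [
    AIToolRecord(
        name="example-chat-tool",           # hypothetical entry
        prompts_processed_in="unknown",     # "unknown" is a valid, alarming answer
        outputs_stored_in="vendor cloud, region unspecified",
        retention="unknown",
        who_has_access="vendor staff, per terms of service",
        governing_jurisdiction="US",
    ),
]
```
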
Rule: Treat data geography as a security decision
Do: Know which jurisdictions your data passes through, and choose tools whose data routing is compatible with your obligations. For Canadian organizations, this includes PIPEDA and provincial equivalents.
Don’t: Treat “the cloud” as a location.
Why: Every piece of data is somewhere physical, governed by some specific legal regime, accessible to some specific set of state and corporate actors. For people whose safety depends on data not reaching their home government, this is the whole question.

Rule: Validate third-party integrations before you connect them
Do: Review every third-party AI integration (plugins, extensions, API connectors, “AI-powered” features) the same way you would review a new vendor.
Don’t: Enable AI features by default in tools you already use.
Why: Many platforms now ship AI capabilities that are turned on automatically and may send existing data to systems you have not vetted. Email clients, collaboration tools, CRMs, and browser extensions are the most common quiet exposures.

Rule: Segment by sensitivity
Do: Classify data into public, internal, confidential, and protected. Then decide in advance which AI tools each category is allowed to touch. This can be a part of onboarding.
Don’t: Use the same AI tool for drafting a public blog post and for analyzing beneficiary case files.
Why: The convenience of one tool for everything is exactly the problem. The classification is what tells you when to slow down.

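Written down as data, the classification becomes something a script can enforce rather than something people remember under deadline. A sketch with made-up tool names; the ceilings are illustrative, not a recommendation:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    PROTECTED = 4

# Decided in advance, per tool: the most sensitive category it may touch.
# Tool names are placeholders for whatever you actually use.
TOOL_CEILING = {
    "hosted-chatbot": Sensitivity.PUBLIC,
    "org-tenant-llm": Sensitivity.INTERNAL,
    "local-model": Sensitivity.CONFIDENTIAL,
    # Nothing here is cleared for PROTECTED: under this example policy,
    # beneficiary case files do not go through an AI tool at all.
}

def allowed(tool: str, data: Sensitivity) -> bool:
    ceiling = TOOL_CEILING.get(tool)
    return ceiling is not None and data.value <= ceiling.value

assert allowed("local-model", Sensitivity.INTERNAL)
assert not allowed("hosted-chatbot", Sensitivity.CONFIDENTIAL)
```
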
Rule: Watch for intersectional exposure
Do: Think about how AI use compounds existing vulnerabilities for the people you serve.
Don’t: Assume that anonymization protects beneficiaries.
Why: Modern AI systems are good at re-identification across multiple tools, prompts, and outputs over time. The exposure compounds with use. For anyone whose safety depends on information asymmetry, this rule does the most work.

Rule: Build a kill switch
Do: Maintain a documented process for disconnecting AI tools quickly: a vendor breach, a policy change, a compromised account, a tool acquired by an actor you do not trust.
Don’t: Build workflows so dependent on a specific AI tool that you cannot stop using it.
Why: The technology and vendor landscape are unstable. Tools change ownership, pricing, terms, get acquired, get banned in jurisdictions you operate in. Resilience means being able to move.

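The documented process can itself be executable, which keeps it from going stale in a wiki. A sketch of the shape only: every function here is a stub standing in for your vendor’s actual revocation steps, and the tool name is a placeholder:

```python
# One entry per tool: the ordered, documented steps to sever it.
# Every function is a stub; real ones would call your identity
# provider, your secrets manager, and the vendor's own mechanisms.

def revoke_api_keys(tool: str) -> None:
    print(f"[{tool}] rotate or delete API keys in the secrets manager")

def disable_sso(tool: str) -> None:
    print(f"[{tool}] disable the app in the identity provider")

def request_data_deletion(tool: str) -> None:
    print(f"[{tool}] trigger the export and deletion rights in the contract")

KILL_SWITCH = {
    "example-ai-vendor": [revoke_api_keys, disable_sso, request_data_deletion],
}

def disconnect(tool: str) -> None:
    # Run, and thereby log, each documented step in order.
    for step in KILL_SWITCH[tool]:
        step(tool)

disconnect("example-ai-vendor")
```
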
Rule: Train people, not just the policy
Do: Make sure everyone who uses AI tools understands what data is sensitive, what tools are approved for what purposes, and what to do if something goes wrong. Repeat the training.
Don’t: Treat AI security as the IT person’s problem.
Why: The most common security failures happen when we paste a sensitive document into a free chatbot because we’re in a hurry.

Building Your Own Version

Please use this page as a starting point and tailor it to your own usage. It is not a compliance checklist, but feel free to turn it into something operational:

  • Identify the data categories specific to your work and write the rules in those terms. These could be beneficiary records, partner intelligence, donor information, fieldwork, internal strategy, etc.
  • Identify the jurisdictions specific to your operations and your beneficiaries, and name them. Treat this as digital hygiene, not legal compliance; naming jurisdictions does not by itself satisfy your real legal obligations.
  • Identify the AI tools currently in use across your organization. Self-auditing your tool inventory is routinely underestimated, but it is often the most valuable step.
  • Maintain version control and ownership (a minimal sketch follows this list).
  • Publish it internally or externally so that staff, and the organizations you partner with, know what to expect.
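
One minimal shape for that document, expressed as data so that diffs, dates, and ownership are visible in version control. Every value below is an illustrative placeholder:

```python
# A minimal, dated, versioned AI-usage practice document, kept as data
# so that diffs and ownership are visible in version control.
# Every value below is an illustrative placeholder.
AI_PRACTICE = {
    "version": "0.3",
    "dated": "2025-01-15",
    "owner": "operations lead",        # a named role, not "everyone"
    "data_categories": [
        "beneficiary records",
        "partner intelligence",
        "donor information",
        "internal strategy",
    ],
    "jurisdictions": ["Canada (PIPEDA, provincial equivalents)"],
    "approved_tools": {
        "local-model": "confidential and below",
        "org-tenant-llm": "internal and below",
    },
    "review_due": "2025-07-15",        # the practice expires unless revisited
}
```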

If your organization needs this translated into an operational policy specifically scoped to your data, jurisdictions, beneficiaries, and risk profile, please reach out.