<img alt="" src="https://secure.enterprise-consortiumoperation.com/792484.png" style="display:none;">
Schedule a Demo
ai school safety

3 Things to Consider When Integrating AI in School Safety

Kelly Moore
March 26, 2024

What is AI, and what is it good for? These were the questions I started thinking about last year when we began our partnership with ZeroEyes. ZeroEyes is a company that uses AI to read live video feeds and detect the brandishing of firearms, more commonly referred to as gun detection. Almost everywhere I look, someone somewhere is talking about AI and what it can do. Some understand the amount of caution needed when integrating AI into everyday life, while others look to AI to solve all of their problems without fully understanding what it is.

  1. The Decision Matrix
    Before we delve into what AI can do for us from the school safety perspective, let’s look at decision matrices. A decision matrix is a tool that enables a team or individual to methodically identify, analyze, and evaluate the significance of connections between various sets of information. It is particularly useful for examining a wide array of decision factors and determining the relative importance of each one. The problem is that some people rely on the matrix to make the decision for them: if this, then that. This lets people make decisions without understanding them, because the decision is “based on the matrix.” It also lets those who don’t grasp the need to make a decision blame the matrix when no decision gets made, even when the information was sufficient to decide but simply didn’t align perfectly with the matrix. The sketch below shows how a matrix scores options; it is still up to us to interpret the result.
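
    To make that concrete, here is a minimal sketch of a weighted decision matrix in Python. The criteria, weights, and vendor ratings are hypothetical, invented for illustration only; the point is that the matrix produces information (a score), not a decision.

```python
# A minimal sketch of a weighted decision matrix.
# Criteria, weights, and ratings below are hypothetical examples only.

CRITERIA = {                    # criterion -> weight (must be chosen by the team)
    "detection_accuracy": 0.4,
    "response_time": 0.3,
    "cost": 0.2,
    "ease_of_integration": 0.1,
}

# Each option is rated 1-5 against each criterion (illustrative ratings).
OPTIONS = {
    "vendor_a": {"detection_accuracy": 4, "response_time": 5, "cost": 2, "ease_of_integration": 3},
    "vendor_b": {"detection_accuracy": 3, "response_time": 3, "cost": 5, "ease_of_integration": 4},
}

def weighted_score(ratings: dict) -> float:
    """Combine ratings into a single weighted score."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

for name, ratings in OPTIONS.items():
    # The score is information for the decision-maker, not the decision itself.
    print(f"{name}: {weighted_score(ratings):.2f}")
```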

  2. Context
    In the realm of school safety, many will use AI to make decisions for them, decisions they don’t fully understand. AI should be treated as information presented to us, not as something that makes the decision for us. The information AI provides has to be validated, and context has to be added. Context? Yes, context has to be added to understand the scope of the information AI is giving us. For example, ZeroEyes validates every detection through trained military and law enforcement professionals at two monitoring centers. Once an alert is triggered, signaling the possibility of a gun being present, they pull up the image and verify that it is 1) a gun, and 2) a threat. They want to make sure the spotted gun doesn’t belong to a school resource officer (SRO) or other law enforcement officer who has a reasonable expectation of having a gun on a school campus. AI only recognizes that a gun is present, not why the gun is present; the sketch below captures that division of labor.
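
    The following is a rough sketch of that idea, not ZeroEyes’ actual system: the AI produces an alert, and trained people supply the validation and context before any response.

```python
# A hypothetical sketch of "AI provides information, humans add context."
# Field names and logic are assumptions for illustration, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Alert:
    image_id: str
    ai_says_gun: bool            # what the model detected
    confirmed_gun: bool          # set by a trained human reviewer
    carrier_is_authorized: bool  # e.g., a school resource officer (SRO)

def requires_response(alert: Alert) -> bool:
    """A human confirms the detection and adds context before any action."""
    if not alert.ai_says_gun:
        return False
    # Validation: is it actually a gun?
    if not alert.confirmed_gun:
        return False
    # Context: an authorized carrier (such as an SRO) is not a threat.
    return not alert.carrier_is_authorized

print(requires_response(Alert("cam-12", True, True, True)))   # False: SRO on campus
print(requires_response(Alert("cam-07", True, True, False)))  # True: verified threat
```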

  3. Accepting Responsibility
    As we become better acquainted with the value of integrating AI into our lives, we must also accept responsibility for the consequences of using the information we get from it. After all, we are responsible for our decisions and cannot blindly rely on AI to be correct when it has difficulty verifying and adding context to the information it provides. We will still have to understand our decisions and why we made them; if something goes wrong, we will not be allowed to point to AI as the reason for the mistake.
