Use of Artificial Intelligence by Community Association Boards
Over the past year the world has experienced a sea change in the development and widespread availability of artificial intelligence programs. In simple terms, artificial intelligence, or "AI," is technology that enables computers to perform complex tasks normally requiring human reasoning, such as learning, comprehension, problem solving, and decision making. Widely available platforms, such as Google Gemini and ChatGPT, can now instantly pore over data either provided by a user or available on the internet and provide reasoned answers to questions. Like any new technology, artificial intelligence has the potential to be a valuable resource, but it also comes with certain hazards and must be used cautiously.
Most of us are already using artificial intelligence on a daily basis, even if we don't realize it. The top result in any Google search, for example, is usually an AI-generated summary of information gleaned from freely accessible sources, such as Wikipedia. Similar programs can also be used to process and analyze data specified or provided by the user. Some organizations, including community associations, have begun looking into AI programs to help with reviewing voluminous records and answering simple questions. While certain applications of AI, when used appropriately, can certainly provide time- and cost-saving benefits, community association boards must be mindful that AI is only a tool, and its results must be verified and confirmed before they can be relied upon.
We've all heard the stories of AI providing false or misleading information, sometimes with calamitous results for the user who put too much faith in the program. One of the areas most fraught with potential danger for association boards is using AI to analyze legal documents or to answer legal questions. An AI program can likely aid in pointing a user to the appropriate section of a document and provide some tools for interpreting what it means, but this information must be verified before a board member can appropriately rely upon it. Relying upon AI-generated information without taking reasonable steps to verify its accuracy could lead to claims that a board member failed to meet his or her fiduciary duty of care.
A common problem with AI-generated responses to legal questions is that information is often pulled from blogs or chat threads that may refer to law that is inapplicable in your jurisdiction or that may have been originally written by someone without the requisite knowledge or expertise. Other times, AI programs generate responses that are wildly inaccurate simply because of a flaw or gap in the programming, without any real explanation of what went wrong. It is also important to remember that while AI can provide technical information and analysis that might be valuable when used appropriately as one available tool, disputes over the meaning of declarations and other legal documents will ultimately be decided by human judges and juries, not a computer. A false sense of certainty based upon AI-generated answers can easily cause a community association board to make unwise choices and avoidable errors.
It is important to confer with legal counsel prior to making major decisions that can have long-term impacts on your community. If your association is considering using AI or other new technologies to improve its operations, please reach out to Williams & Strohm, LLC at 614-228-0207 and speak with one of our attorneys to ensure that you do so in a way that adequately protects the interests of the association.
Brad Terman
Mr. Terman has been practicing since 2008, with experience in many areas of law including civil litigation, creditors' rights, landlord/tenant, and community association law. Mr. Terman has extensive experience in bankruptcy and collection matters, as well as enforcement matters related to community associations. Read Brad Terman's full bio.