AI Basics: What it is, Harmful Applications, and Responsible Adoption
A recent Gallup-Telescope survey reveals that while nearly all Americans (99%) use AI-enabled products—often unknowingly—public sentiment toward AI remains predominantly negative. Despite the widespread use of AI in tools like navigation apps, virtual assistants, and streaming services, 72% of respondents had a negative view of AI’s role in spreading misinformation, and 64% were concerned about its impact on social connections. However, 61% held a positive view of AI’s potential in medical diagnostics and treatment.
The survey highlights a general confusion about what constitutes true AI versus basic computer programs. Additionally, most Americans believe both government and businesses share responsibility for addressing AI-related challenges, such as misinformation, privacy violations, and job losses. When it comes to national security threats, however, 62% believe the government should take the lead.
Although perceptions of AI are bleak, this is unlikely to reduce usage of AI-enabled products. The findings underscore public demand for collaborative regulation between businesses and government to address AI’s risks effectively. The poll surveyed 3,975 adults between Nov. 26 and Dec. 4, 2024, with a margin of error of ±2.6 percentage points.
ZNest’s Take
Key Takeaways
Basic Programs vs. AI: Basic programs follow fixed rules and can’t adapt or learn. AI learns from data, makes predictions, and improves over time.
Blended Systems: People are confused because many apps (e.g., navigation) combine traditional programs with AI.
Negative Sentiments Around AI: There are genuine issues that must be addressed, such as poorly designed AI producing biased or unreliable results, and fake media (videos, images, audio) that appears real.
Positive Potential: AI has many beneficial applications and is becoming unavoidable.
Safe AI Adoption: To safely adopt AI, make sure your software provider:
Protects sensitive data and removes PII.
Audits for bias, allows for human oversight, and tests rigorously.
Rolls out AI systems step-by-step, trains users, and monitors for issues.
Let us first discuss the difference between a basic computer program and AI. The simplest explanation is that a basic computer program is rule-based. These programs follow a set of predefined instructions written by programmers, making their behavior entirely predictable. At their core, they operate like calculators: when given an input, they provide a specific output. Even the most complex programs are essentially a series of layered "if-then" statements. As a result, basic programs cannot analyze data to find patterns or make predictions beyond what is explicitly coded.
Think of a bakery. A basic computer program is like following a recipe exactly as written. If the recipe says, "Bake at 350°F for 12 minutes," it does that every time. It can’t change or adapt unless someone rewrites the recipe. It only does what it’s told.
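For readers who like to see it spelled out, here is a minimal Python sketch of a rule-based program. Everything in it (the function name, the temperatures) is hypothetical and exists only to mirror the recipe analogy:

```python
# A rule-based program: every behavior is an explicit, fixed instruction.
# The function and values are hypothetical, mirroring the bakery analogy.

def bake_cookies(oven_temp_f: int, minutes: int) -> str:
    # Layered "if-then" statements: the program can only react in ways
    # a programmer wrote down ahead of time.
    if oven_temp_f != 350:
        return "Error: recipe requires 350°F"
    if minutes != 12:
        return "Error: recipe requires 12 minutes"
    return "Cookies baked exactly as the recipe says"

print(bake_cookies(350, 12))  # -> baked exactly as the recipe says
print(bake_cookies(385, 12))  # -> rejected; the program cannot adapt
```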
AI, on the other hand, can learn from data and improve over time without being explicitly reprogrammed. It can analyze complex data, make predictions, and provide recommendations, often mimicking human-like reasoning.
Returning to the bakery example, AI is like a baker who learns how to bake cookies and gets better each time. The baker may follow the recipe the first time but then notice that the oven doesn’t actually reach 350°F unless set to 385°F, and they adjust the recipe accordingly. AI learns from experience, adapts, and makes decisions based on new information.
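Here is an equally minimal sketch of that learning step, again with made-up numbers. Instead of hard-coding 385°F, the program estimates the oven’s error from past bakes and corrects for it:

```python
# A minimal sketch of "learning from data": the system estimates the
# oven's temperature error from past bakes and corrects the setpoint.
# All numbers are invented for illustration.

observations = [  # (setpoint we used, temperature actually reached)
    (350, 318), (360, 327), (385, 350),
]

# Learn the average gap between what we ask for and what we get.
offset = sum(s - a for s, a in observations) / len(observations)

def adjusted_setpoint(target_f: float) -> float:
    # Apply what was learned: overshoot the dial so the oven truly
    # reaches the target, like the baker setting 385°F to get 350°F.
    return target_f + offset

print(round(adjusted_setpoint(350)))  # ~383: learned, not hard-coded
```

Feed it new observations and the correction changes on its own; that adaptability, not any single rule, is the difference.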
A lot of confusion arises because most applications today are not binary—meaning they are not entirely one or the other. Many widely used applications blend AI features with traditional programming. A good example of this is a navigation app. It combines basic algorithms for route calculation with AI for real-time traffic predictions to provide the fastest route.
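A toy version of that blend might look like the following, where the map data and speeds are fixed rules and the traffic multipliers stand in for a learned model’s predictions (all values invented for illustration):

```python
# A blended system in miniature: rule-based route math combined with
# a learned traffic prediction. The "model" here is a stand-in
# dictionary of multipliers, purely illustrative.

ROUTE_MILES = {"highway": 12.0, "side_streets": 9.0}   # fixed map data
BASE_MPH    = {"highway": 60.0, "side_streets": 30.0}  # fixed speed rules

# In a real app this would come from an AI model trained on live
# traffic; here it is a hypothetical placeholder for its predictions.
predicted_slowdown = {"highway": 1.8, "side_streets": 1.1}

def fastest_route() -> str:
    def minutes(route: str) -> float:
        base = ROUTE_MILES[route] / BASE_MPH[route] * 60  # rule-based part
        return base * predicted_slowdown[route]           # AI-predicted part
    return min(ROUTE_MILES, key=minutes)

print(fastest_route())  # side streets win once predicted traffic is factored in
```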
The overall negative public sentiment around AI is not without merit. There are legitimate concerns, ranging from copyright infringement issues to the increasing prevalence of “slop,” deep fakes, and beyond.
Slop refers to the low-quality, biased, or unreliable output of poorly designed, implemented, or misused artificial intelligence systems. In other words, it’s AI that prioritizes speed or cost over accuracy, fairness, and responsibility.
Deep fakes are realistic yet fake digital media—typically videos, images, or audio—that use AI to alter or create content that appears authentic. Examples of deep fakes include:
Face swapping: Replacing one person’s face with another in a video, making it look like someone (e.g., a celebrity) said or did something they didn’t.
Voice mimicking: Generating audio that mimics someone’s voice to create fake phone calls or speeches.
Lip syncing: Manipulating video so it appears that someone is saying words they never actually said.
If all of this sounds scary, it’s because it is. However, it’s also important to understand that AI has genuinely positive applications. AI will only become more prevalent, whether the general public wants it to or not. While there will always be bad actors, what’s critical is educating yourself and your communities on what AI is, what it isn’t, and its capabilities, limitations, and applications.
When deciding which technologies and platforms to adopt, ensure your software provider meets the following standards:
Data Privacy and Security
Securely store sensitive data.
If data is used for further training, ensure it is not shared with third parties.
Remove or mask all personally identifiable information (PII) in datasets to protect user privacy (a minimal masking sketch follows this list).
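As one illustration of the PII point above, here is a minimal masking sketch. Production systems use far more sophisticated detection; these regular expressions only catch the simplest cases:

```python
# A minimal sketch of masking personally identifiable information (PII)
# before data is stored or used for training. Illustrative only: real
# systems need much more robust detection than simple patterns.

import re

PII_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    # Replace each detected PII span with a neutral placeholder.
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Resident Jane reached me at 555-867-5309 or jane@example.com."
print(mask_pii(note))
# -> "Resident Jane reached me at [PHONE] or [EMAIL]."
```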
Reliability and Transparency
Regularly audit AI models for biases that could lead to unfair or harmful outcomes (a simple audit sketch appears after this list).
Clearly communicate how AI models are used and the basis for decisions, especially for high-impact applications.
Maintain a human-in-the-loop process for critical decisions where mistakes could have significant consequences.
Rigorously test AI models in real-world scenarios to identify edge cases and failure modes.
Design systems to fail safely and gracefully in unexpected situations.
Continuously monitor AI systems to ensure they perform as expected and adapt to changing conditions.
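To make the bias-audit item concrete, here is one simple check a provider might run: comparing a model’s favorable-outcome rate across groups. The data and the 80% rule-of-thumb threshold are illustrative, not a complete audit:

```python
# A minimal sketch of one common bias audit: comparing a model's
# favorable-outcome rate across groups (the "80% rule" check).
# The data and threshold are illustrative, not a full audit.

predictions = [  # (group, model said "approve"?)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group: str) -> float:
    outcomes = [ok for g, ok in predictions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"A: {rate_a:.0%}, B: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold
    print("Potential disparate impact -- route to human review")
```

Note how the failing case routes to a person rather than deciding on its own; that is the human-in-the-loop requirement in practice.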
Deployment and Post-Deployment Service
Deploy AI solutions in a phased manner to test performance and gather feedback in real-world conditions.
Incorporate fallback mechanisms or redundant systems in case AI components fail (a minimal fallback sketch appears after this list).
Train employees and end-users on the proper use and limitations of AI systems, including addressing safety concerns.
Continuously evaluate AI systems post-deployment for unintended consequences.
Update models or processes to address new safety concerns as they emerge.
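As an example of the fallback item above, here is a minimal sketch in which a hypothetical AI estimate degrades gracefully to a rule-based default whenever the model errors out or reports low confidence:

```python
# A minimal sketch of a fallback mechanism: if the AI component fails
# or is unsure, the system degrades gracefully to a rule-based default.
# ai_estimate is a hypothetical stand-in for a real model call.

def ai_estimate(history: list[float]) -> tuple[float, float]:
    # Hypothetical model: returns (prediction, confidence).
    avg = sum(history) / len(history)  # raises if history is empty
    return avg, 0.9

def staffing_forecast(history: list[float]) -> float:
    FALLBACK = 5.0  # rule-based default, e.g., last quarter's baseline
    try:
        prediction, confidence = ai_estimate(history)
        if confidence < 0.7:
            return FALLBACK       # AI unsure: fail safely to the default
        return prediction
    except Exception:
        return FALLBACK           # AI errored: the system keeps working

print(staffing_forecast([4.0, 6.0, 5.5]))  # AI path (~5.17)
print(staffing_forecast([]))               # fallback path: 5.0
```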
Have a topic you would like us to cover? Please let us know!