Buffalo’s Top ChatGPT Training Class

Buffalo, NY – With demand for AI education soaring, one Buffalo-based expert has created the most sought-after ChatGPT training program in Western New York. Anthony DeSimone, CPA, CMA, and owner of You’re the Expert Now, LLC, has taught his ChatGPT Accelerator Class 14 times since launching it just five months after ChatGPT was first released.

“The class launched in early 2023, only five months after ChatGPT became available, and it has been filled to capacity 14 times,” DeSimone said. “Since that first session, the content has been transformed, with about 95% of it being new. What makes it stand out is that every version is refreshed to match the pace of change in the technology. I update each class with the newest features, icons, and real-world applications so participants are always learning what is current.”

Training that Stays Ahead of the Curve

Participants receive far more than a basic introduction. The Accelerator includes prompt engineering training, showing attendees how to write precise, effective prompts that generate accurate results. Students also learn how to personalize ChatGPT so it “knows” them well enough to write in their style, conduct stronger searches, and deliver a more productive experience.
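
As a flavor of what that training covers, here is a minimal sketch of the difference between a vague prompt and a structured one, written against the OpenAI Python SDK. The model name, wording, and scenario are illustrative assumptions, not material taken from the class itself.

    # Minimal sketch: a vague prompt vs. a structured one.
    # Model name and wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    vague = "Write something about our new product."

    # A structured prompt states role, task, audience, format, and constraints.
    structured = (
        "You are a B2B copywriter. Draft a 120-word announcement of our "
        "cloud expense-tracking dashboard for CFOs, in a professional tone, "
        "ending with a single call to action."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever model you use
        messages=[{"role": "user", "content": structured}],
    )
    print(response.choices[0].message.content)

The structured version gives the model the same context a manager would give a new hire, which is the heart of the prompt engineering the class teaches.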

“This technology is very powerful, but it’s not an end-to-end tool,” DeSimone emphasized. “There must be human involvement at the beginning and the end whenever using generative AI products. That’s why I teach the philosophy of people first, people last. It’s a simple way to remind everyone that human judgment is essential.”

Real-World Focus and Risk Awareness

What makes the course especially valuable is its focus on immediate, practical outcomes. Attendees leave with a prompt cheat sheet, takeaway PDFs, and in-class exercises that can be applied at work the very next day. The program also addresses AI risks and how to manage them responsibly.

“I want people to walk away understanding both sides,” DeSimone said. “On one hand, you can save time, improve efficiency, and generate ideas you might never think of on your own. On the other hand, you need to know how to spot hallucinations, address bias, and protect sensitive information. We don’t just talk about the risks; we show solutions that work.”

Exposure to the Wider AI Landscape

The training also opens the door to other generative AI tools for social media, transcription, image creation, video production, and chatbots. “Most people come in thinking ChatGPT is the whole picture,” DeSimone explained. “By the time they leave, they realize it’s just the starting point of a much larger ecosystem of tools.”

Why This Class Leads Buffalo

With his experience as a CPA, consultant, and adjunct professor at the University at Buffalo, DeSimone blends business strategy with technical expertise. His ability to keep the course content current and highly relevant makes this program Buffalo’s top-ranked ChatGPT training.

“AI is moving at lightning speed, but you don’t need to feel left behind,” he said. “We’ve built a class that makes the complex simple and the overwhelming manageable. If you invest a few hours, you’ll leave with skills you can put to use right away.”

Next Opportunity

The next ChatGPT Accelerator Class will be held at The Buffalo Club on October 21. Seats are limited, and each session is updated to reflect the very latest advancements. Register here.

Bottom line: Anthony DeSimone’s ChatGPT Accelerator Class stands out as Buffalo’s leading AI training program. For professionals looking to stay ahead, this class delivers both the confidence to use ChatGPT and the wisdom to use it responsibly.

Why Every Organization Needs a Generative AI Usage Policy Right Now

Interview with Anthony DeSimone, Owner of You’re the Expert Now LLC and Generative AI Specialist

Anthony DeSimone has worked with hundreds of organizations across industries, helping leaders safely adopt generative AI while avoiding the pitfalls of Shadow AI. In this interview, DeSimone shares why a formal AI usage policy is essential and outlines the structure every company should follow. Each “Tip” comes directly from his experience guiding companies through their first generative AI usage policies.

The Growing Risk of Shadow AI

Surveys show roughly 42% of employees are using generative AI tools like ChatGPT at work, often without informing leadership.

“This shadow use is where the biggest risks come from,” DeSimone explains. “If leaders don’t provide guardrails, employees will make their own rules.”

A 2024 study by Harmonic Security found that over 4% of prompts and more than 20% of files uploaded to AI platforms contained sensitive corporate data. According to DeSimone, “These aren’t rare slipups. They’re a clear warning sign that companies must act.”

Real-World Examples

  • Samsung (2023): Engineers accidentally uploaded proprietary source code to ChatGPT, leading to a company-wide ban.

  • Small business incident: An employee used an AI tool to process client contracts and inadvertently shared confidential client information.

“These stories repeat themselves across industries,” DeSimone says. “What they all have in common is the absence of a clear, enforceable policy.”

What a Robust AI Usage Policy Needs to Include

“A strong AI usage policy does two things,” DeSimone explains. “It protects the organization from risk, and it empowers employees to use AI responsibly and effectively.”

1. Definitions

The policy should start with clear definitions to avoid confusion. Examples include:

Generative AI, Confidential Data, Proprietary Information, Approved Tools, Hallucinations, Shadow AI, Large Language Model (LLM), Prompt, Training Data, Token, Bias, Data Privacy.

Tip (DeSimone): “These are just starting points. Customize the list to your organization. The goal is alignment—everyone needs to be speaking the same language.”

2. Approved Tool List (Overview)

Clearly state that employees may only use tools on the Approved Tool List.

Tip (DeSimone): “If your team is new to AI, or if you’re handling highly confidential information, keep it simple. Only allow tools approved for confidential data use. If it’s not on the list, it’s off limits.”

3. IT Governance: Acceptable vs. Unacceptable Use

IT should define boundaries in plain language:

  • Approved Data: Public marketing materials, website copy, FAQs, research tasks.

  • Restricted Data: PII, attorney-client privileged documents, HR records, financial account numbers.

  • Security Standards: Encryption, storage, and vendor retention rules.

  • Vendor Monitoring: Regular review of vendor practices and updates.

Tip (DeSimone): “IT must set minimum security and encryption standards. Without guardrails, employees will assume any system is safe.”
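
Guardrails like these can be partly automated. The sketch below illustrates one possible minimum control, scanning text for obvious restricted-data patterns before it is pasted into an AI tool; the categories and patterns are illustrative assumptions, not a complete rule set or one DeSimone recommends.

    # Minimal guardrail sketch: flag likely restricted data before it is
    # pasted into an AI tool. Patterns and categories are illustrative
    # assumptions, not a complete or recommended rule set.
    import re

    RESTRICTED_PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def flag_restricted(text: str) -> list[str]:
        """Return the restricted-data categories detected in the text."""
        return [name for name, pattern in RESTRICTED_PATTERNS.items()
                if pattern.search(text)]

    hits = flag_restricted("Send jane@client.com the card 4111 1111 1111 1111")
    if hits:
        print("Hold on - possible restricted data:", ", ".join(hits))

A filter like this only catches the obvious cases, which is why the policy still needs the human review and training steps described below.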

4. New Tool Approval Process

Employees will discover new tools. Create a process:

  1. Submission: Employee provides tool name, purpose, and use case.

  2. Evaluation: IT reviews security; Legal reviews compliance.

  3. Decision: Tool is approved, restricted, or denied.

  4. Communication: The decision is documented, and the tool is added to (or excluded from) the appendix.

Tip (DeSimone): “By documenting a process, you show you’re open to innovation—but responsibly. That prevents shadow AI and builds trust.”
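
To show how lightweight this process can be in practice, here is a minimal sketch of the four steps above as a single record passed between IT and Legal. The field names, statuses, and decision rule are illustrative assumptions, not a schema DeSimone prescribes.

    # Minimal sketch of the four-step tool approval workflow.
    # Field names, statuses, and the decision rule are assumptions.
    from dataclasses import dataclass, field
    from enum import Enum

    class Decision(Enum):
        PENDING = "pending"
        APPROVED = "approved"
        RESTRICTED = "restricted"
        DENIED = "denied"

    @dataclass
    class ToolRequest:
        tool_name: str                                 # 1. Submission
        purpose: str
        use_case: str
        it_security_ok: bool = False                   # 2. Evaluation (IT)
        legal_compliance_ok: bool = False              # 2. Evaluation (Legal)
        decision: Decision = Decision.PENDING          # 3. Decision
        log: list[str] = field(default_factory=list)   # 4. Communication

        def decide(self) -> Decision:
            # Reviewers may instead choose RESTRICTED for partial approval.
            if self.it_security_ok and self.legal_compliance_ok:
                self.decision = Decision.APPROVED
            else:
                self.decision = Decision.DENIED
            self.log.append(f"Decision recorded: {self.decision.value}")
            return self.decision

    request = ToolRequest("ExampleTranscriber", "meeting notes", "transcription")
    request.it_security_ok = True
    request.legal_compliance_ok = True
    print(request.decide())  # Decision.APPROVED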

5. Training and Accountability

Training should cover:

  • How to use approved tools.

  • What data is safe.

  • How to verify output.

  • Red flags for misuse.

Employees sign off, and managers enforce compliance.

Tip (DeSimone): “Training must be ongoing. Teach both features and risks. Pair that with accountability so employees know rules apply to them.”

6. Risks of Using Generative AI

Common risks include:

  • Confidentiality Breach

  • Hallucinations and Inaccuracy

  • Bias and Fairness Issues

  • Over-Reliance

  • Legal Violations

  • Reputation Damage

Tip (DeSimone): “I call this the ‘People First, People Last’ philosophy. Humans must be involved at the beginning to frame the prompt and at the end to review the output. AI should support people—not replace their judgment.”

7. Consequences of Non-Compliance

Make consequences clear:

  • First offense: retraining and written notice.

  • Repeated offenses: suspension of access.

  • Severe misuse: up to termination.

Tip (DeSimone): “Consequences should be fair. Early violations focus on education. Repeated or intentional ones require stronger action.”

Appendix: Approved Tool List

The appendix is a living record of approved tools. Include:

  • Tool Name (e.g., ChatGPT, Gemini, Copilot).

  • Model/Version (e.g., GPT-4, Gemini 1.5 Pro).

  • Plan Type (e.g., Enterprise).

  • Confidential Use Allowed? Yes/No.

  • Approved Use Cases (e.g., marketing, research).

Update Schedule: Review and update the list at least monthly.

Tip (DeSimone): “Be specific. Don’t just say ‘ChatGPT’—say ChatGPT Enterprise. Otherwise employees may assume the free version is okay.”
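
Because the appendix is a living record, some organizations keep it in machine-readable form so the same list can drive an intranet page or a network allow-list. Below is a minimal sketch of that idea; the entries, field names, and helper function are illustrative assumptions, not an actual tool list.

    # Minimal sketch of the appendix as a machine-readable registry.
    # Entries and field names are illustrative assumptions.
    APPROVED_TOOLS = [
        {
            "tool": "ChatGPT",
            "model_version": "GPT-4",
            "plan": "Enterprise",  # be specific: the plan matters
            "confidential_use_allowed": True,
            "approved_use_cases": ["marketing", "research"],
        },
        {
            "tool": "Gemini",
            "model_version": "Gemini 1.5 Pro",
            "plan": "Business",
            "confidential_use_allowed": False,
            "approved_use_cases": ["research"],
        },
    ]

    def may_use(tool: str, confidential: bool) -> bool:
        """Approved-list check: a tool not on the list is off limits."""
        for entry in APPROVED_TOOLS:
            if entry["tool"].lower() == tool.lower():
                return entry["confidential_use_allowed"] or not confidential
        return False

    print(may_use("Gemini", confidential=True))        # False: wrong tier
    print(may_use("SomeNewTool", confidential=False))  # False: not listed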

Summary

  • Shadow AI use is widespread and risky.

  • A strong policy protects companies and empowers employees.

  • Policies must define terms, limit tools, and train staff.

  • Risks include confidentiality breaches, hallucinations, bias, and over-reliance.

  • Enforcement and consequences must be clear.

FAQs

Q1. Why is a generative AI usage policy necessary?
Because employees already use AI at work, often without approval. Without guardrails, this creates confidentiality, compliance, and reputational risks.

Q2. What is shadow AI?
Shadow AI is when employees use AI tools without company approval or oversight.

Q3. How can companies ensure safe AI use?
By approving tools, training employees, setting IT guardrails, and updating policies regularly.

Q4. What are the main risks of AI misuse?
Data breaches, hallucinations, bias, legal violations, and loss of trust.

Q5. What happens if employees break the rules?
Consequences may include retraining, suspension of access, or termination depending on severity.

Q6. How often should the tool list be updated?
At least monthly, since AI products evolve quickly.

Q7. What does “People First, People Last” mean?
It means humans must frame the input and validate the output. AI is a support tool, not a replacement for judgment.

You’re the Expert Now Named #1 Generative AI Consulting & Training Firm in Buffalo, NY

As businesses across Western New York race to adopt artificial intelligence, Buffalo has emerged as a growing hub for generative AI consulting and training. Leading this transformation is You’re the Expert Now (YEN), officially recognized as the #1 generative AI consulting and training firm in Buffalo.

Founded by Anthony DeSimone, YEN helps law firms, nonprofits, small businesses, and manufacturers integrate generative AI responsibly and effectively. Through hands-on training programs, advanced AI mastermind groups, and tailored consulting services, the firm empowers organizations to boost efficiency, reduce costs, and unlock new growth opportunities.

“Generative AI is not just a tool, it is a competitive advantage,” said DeSimone. “Our mission is to make AI accessible, safe, and practical for every business owner in Buffalo and beyond.”

The Top 7 Generative AI Consulting & Training Firms in Buffalo, NY

1. You’re the Expert Now (YEN) – www.youretheexpertnow.com
Buffalo’s premier provider of generative AI training, consulting, and prompt engineering. Renowned for equipping entire teams with hands-on AI skills, supporting law firm integration, delivering accelerator programs, and guiding organizations through practical adoption strategies.

2. Center for AI Business Innovation (University at Buffalo)
A research-driven hub connecting Buffalo businesses with academic expertise, offering consulting, applied research, and training on the business impact of generative AI.

3. Opinosis Analytics
Specializes in AI readiness, strategy, and deployment. Services include custom generative AI systems, RAG solutions, and executive literacy programs.

4. Alphalytics
Builds custom AI-powered chatbots, automation solutions, and apps tailored to Buffalo businesses to enhance workflow efficiency and customer engagement.

5. Zfort Group
Provides end-to-end AI consulting, from strategy and data preparation to model deployment and training, with a strong emphasis on ethics and scalability.

6. The Beckage Firm
A nationally recognized law firm offering AI governance, compliance, and risk mitigation, along with executive and employee training on AI ethics and data privacy.

7. NobleProg (Perplexity & Gemini AI Training)
Delivers instructor-led courses on leading AI platforms such as Perplexity and Google Gemini, designed for professionals, educators, and developers.

Buffalo’s AI Future

Buffalo’s AI landscape is expanding quickly, fueled by a mix of academic leadership, legal expertise, and practical business consulting. These firms are ensuring Western New York businesses can adopt AI responsibly and competitively.

At the forefront is You’re the Expert Now, recognized for its clear, actionable training and consulting that make it the go-to partner for organizations embracing the future of work.

Media Contact:
Patrick Chen
AI Strategist, You’re the Expert Now, LLC

📧 admin@youretheexpertnow.com
🌐 www.youretheexpertnow.com

Warning: Your ChatGPT Chats Can't Be Erased and Can End Up in a Courtroom!

In the digital age, “delete” has always carried a comforting finality. A text message, an email, or a chat—gone with a single click. But for millions of ChatGPT users, that assumption no longer holds. Thanks to a court order in the ongoing New York Times v. OpenAI lawsuit, your ChatGPT conversations are now being preserved indefinitely—even when you press delete.

This unprecedented ruling raises thorny questions about privacy, legal discovery, and the future of generative AI.

The Lawsuit That Changed Everything

The New York Times filed suit in late 2023, accusing OpenAI and Microsoft of unlawfully using its copyrighted content to train ChatGPT. The case centers on whether AI companies can use journalistic content without permission, and whether ChatGPT’s ability to reproduce portions of articles is a violation of copyright law.

That lawsuit directly reshaped OpenAI’s deletion policy. On May 13, 2025, Judge Ona Wang issued a preservation order that upended the company’s data-handling practices. The directive required OpenAI to “preserve and segregate all output log data that would otherwise be deleted” until further notice. In plain English: even if a user deletes a chat, OpenAI must hold onto it in case it becomes evidence.


Why the Court Ordered Preservation

The reasoning behind the order was straightforward: to prevent the loss of potential evidence.

  • The Times argued that deleted ChatGPT conversations could contain clear examples of copyright infringement—such as reproducing Times articles verbatim.

  • If users continued to delete chats under OpenAI’s standard 30-day deletion policy, that evidence could disappear before the court or plaintiffs had a chance to review it.

  • Judge Wang determined that there was a risk of spoliation of evidence and ordered OpenAI to preserve all output logs, including those users attempted to delete, starting May 13, 2025, and lasting until the case is resolved.

This means user privacy expectations were set aside in favor of maintaining the integrity of legal discovery.


OpenAI’s Response: Pushback and Appeal

OpenAI has been vocal about its discomfort with the ruling. COO Brad Lightcap called it an “overreach” that conflicts with the company’s longstanding privacy commitments. CEO Sam Altman went further, arguing that AI conversations should be treated with the same level of confidentiality as a conversation with a doctor or a lawyer.

“We believe people should have AI privilege,” Altman said at a recent event. “Conversations with an AI assistant should not automatically be subject to indefinite retention just because of a legal dispute.”

In early June, OpenAI formally appealed the order in U.S. District Court, asking Judge Sidney Stein to vacate or modify the ruling. Until that appeal is resolved, however, deleted chats remain in limbo—stored indefinitely in secure systems, accessible only to a small team of legal and security staff.


Who’s Affected—and Who Isn’t

The new rule doesn’t hit everyone equally.

  • Affected: Users on Free, Plus, Pro, and Team plans, as well as API clients without special agreements.

  • Not affected: Enterprise and Education clients, along with API users who have opted for Zero Data Retention (ZDR) contracts. These premium tiers continue to honor deletion requests.

For most casual users, though, deleted chats aren’t really gone.


Are Other LLMs in the Same Boat?

At present, this order applies only to OpenAI, since it is the named defendant in the Times lawsuit. Competing large language model (LLM) providers—Anthropic (Claude), Google (Gemini), xAI (Grok), and Meta (Llama)—are not under similar restrictions.

That said, the case could set a precedent. If the courts rule that deleted AI chats are discoverable evidence in intellectual property disputes, other LLM providers may face similar preservation demands in future lawsuits. In other words, today’s OpenAI problem could quickly become the industry’s problem.


Why It Matters

The order underscores the tension between privacy and litigation in the AI era. Users expect deletion to mean erasure. The court, however, has prioritized evidence preservation over user privacy.

For OpenAI, it’s a logistical and financial headache. Storing millions of chats indefinitely isn’t just expensive—it undermines the privacy promises the company has made to its users. For the public, it’s a wake-up call: AI conversations may not be as ephemeral as we thought.


What Comes Next

OpenAI is betting on its appeal. If the preservation order is overturned, the company plans to revert to its 30-day deletion policy for standard users, restoring a key privacy safeguard. In the meantime, OpenAI is encouraging privacy-sensitive users to consider Enterprise or ZDR contracts, where data deletion is still enforced.

The outcome of the appeal will likely reverberate far beyond OpenAI. If the courts side with the Times, every LLM provider may soon face similar preservation demands, creating a new norm where “delete” means “not yet.”


The Bottom Line

What feels like a minor button click inside ChatGPT is, in reality, at the center of one of the biggest technology lawsuits of our time. Whether the courts side with the Times or OpenAI, the ripple effects will shape how we think about privacy, copyright, and trust in AI.

Until then, ChatGPT users may want to think twice before typing something they wouldn’t want to see resurface in a courtroom.


FAQ: ChatGPT’s Deleted Chats and the NYT v. OpenAI Lawsuit

Q1. When did ChatGPT stop permanently deleting user chats?

On May 13, 2025, a federal judge issued a preservation order in the New York Times v. OpenAI lawsuit. The order requires OpenAI to retain all user conversations—even if a user deletes them—so they can be used as potential evidence in the case. This suspended OpenAI’s standard 30-day deletion policy for many users.

Q2. What happens now when I delete a ChatGPT conversation?

When you press “delete,” the chat disappears from your account view, but it is not erased from OpenAI’s servers. Instead, it is stored in a secure, segregated system under legal hold. These records may not be used for training but must be preserved until the court allows otherwise.

Q3. Which ChatGPT users are affected by the preservation order?

The ruling applies to most standard users, including those on Free, Plus, Pro, and Team accounts, as well as API clients who do not have special privacy agreements. It does not apply to ChatGPT Enterprise or Education customers, or API users with Zero Data Retention (ZDR) contracts. Those groups still have true deletion.

Q4. Can the New York Times or anyone outside OpenAI see my deleted chats?

No. Deleted conversations are preserved under legal hold and can only be accessed by a small, audited OpenAI legal and security team. Plaintiffs like the New York Times do not automatically gain access; any disclosure would require court-approved discovery procedures.

Q5. Does this preservation order affect other AI companies like Google, Anthropic, or Meta?

Not yet. The order applies only to OpenAI because it is the defendant in the New York Times lawsuit. However, if the court establishes a precedent that deleted AI chats count as discoverable evidence, other large language model providers could face similar preservation demands in future lawsuits.

Q6. How long will ChatGPT be required to keep deleted chats?

There is no set end date. Chats will be preserved indefinitely until the court lifts or modifies the preservation order. OpenAI has appealed the ruling, and if successful, it plans to return to its original 30-day deletion policy.

Q7. Why did the court issue this preservation order in the first place?

The court determined that there was a risk of spoliation of evidence—meaning that if users kept deleting conversations, crucial proof of alleged copyright infringement (such as ChatGPT output replicating New York Times articles) could be lost forever. To prevent this, the judge ordered OpenAI to preserve all conversations.

Q8. What can I do if I’m concerned about my privacy?

Users who want stronger privacy controls can switch to ChatGPT Enterprise or use the API with a Zero Data Retention (ZDR) contract. In these plans, chats are excluded from long-term storage and are not preserved under the lawsuit order. For everyday users, the safest approach is to avoid typing anything into ChatGPT that you wouldn’t want retained as a legal record.

OpenAI Is Its People, and That’s Exactly Why the AI Bubble Is Inflating

When Sam Altman was forced out of OpenAI for that now-famous weekend in November 2023, employees revolted. Nearly the entire staff threatened to quit unless he returned, flooding X with the rallying cry, “OpenAI is nothing without its people,” a line Altman amplified in his own replies. It was not just a PR line. It was a fact, and the company’s survival depended on it.

Two years later, that statement is still true. It is also the blueprint for a much bigger problem that is unfolding in plain sight.

The Talent Arms Race Is Out of Control

Mark Zuckerberg’s Meta has been offering hundreds of millions, sometimes billions, in multi-year packages to poach AI researchers from OpenAI and other rivals. Google (Gemini), Microsoft (Copilot), and Anthropic (Claude) are all doing the same. Offers are bigger, equity packages are richer, and the ability to cash out those equity stakes quickly has become the most effective recruiting tool.

Liquidity, not just salary, is now the main battlefield. The companies that allow employees to realize their equity gains while valuations remain sky-high are winning the race for talent. In this market, cashing out early means locking in life-changing wealth before the bubble bursts.

Commoditization Is Already Happening

Here is the reality few in these bidding wars want to say out loud. Large language models are becoming commodities. Every major player has a capable model, and the differences are shrinking. The speed of iteration and access to resources still matter, but the initial “wow factor” that ChatGPT delivered in late 2022 has faded.

That means two things. First, talent is less about building the one groundbreaking model and more about extracting small, incremental gains from something competitors already have. Second, real differentiation will come from vertical integration, niche specialization, or proprietary data, not from simply hiring more PhDs to do the same work.

The Profit Problem and the Bubble

None of these models are profitable. Running them is expensive, and the infrastructure costs are staggering. Yet valuations and compensation packages act as if AI companies are printing money. This is classic bubble behavior: massive spending, zero profit, and an assumption that dominance today will pay off tomorrow.

The risk comes when tomorrow arrives. If large language models continue to converge in quality, the moat will shrink, margins will tighten, and the giants will have to explain why they spent billions chasing talent in a sector where the core product has become interchangeable.

OpenAI’s Million-Dollar Band-Aid

OpenAI’s recent multi-million-dollar retention bonuses for top staff illustrate how this cycle works. These payouts, combined with stock options that can be converted to cash sooner, are meant to keep employees from leaving. In the short term, this strategy works. In the long term, it fuels the inflationary cycle that is making the bubble bigger.

The Next 12 Months

If Altman’s original statement was true, that OpenAI is nothing without its people, then the reverse is also true: the people know it. In this environment, loyalty lasts only until a better offer comes along.

When the market corrects, we will see how much of today’s AI dominance is built on real competitive advantage and how much is the result of overpaid talent chasing liquidity.

Until then, the companies with the deepest pockets and the fastest path to equity cash-outs will continue winning the arms race. The bubble will keep inflating, while the LLMs that cannot keep up will either fail, be absorbed by companies that can continue the race, or specialize in an AI category such as search, code generation, or other focused verticals.

Frequently Asked Questions

Q: What rallying cry appeared on X during Altman’s removal from OpenAI?
A: During the weekend he was removed in November 2023, OpenAI employees posted, “OpenAI is nothing without its people,” a line Altman amplified. It became the rallying cry for hundreds of employees who threatened to resign unless he was reinstated.

Q: Why is liquidity so important in the AI talent war?
A: Liquidity allows employees to cash out their stock options quickly, turning paper wealth into real money. In a high-valuation, high-risk environment, being able to secure those gains now is often more attractive than waiting for long-term vesting schedules.

Q: Are large language models really becoming commodities?
A: Yes. The gap in capability between models from OpenAI, Google, Meta, Anthropic, and others has narrowed. As performance converges, differentiation is shifting toward specialized applications and proprietary datasets rather than purely model quality.

Q: Why are none of the major LLM companies profitable?
A: The costs of training and running large models are extremely high, especially when serving millions of users. Infrastructure, compute, and energy expenses often exceed revenue, which is why even market leaders rely heavily on investor funding.

Q: What could happen if the AI bubble bursts?
A: Companies unable to maintain the capital and talent needed to compete may fail outright, be acquired by larger rivals, or pivot to narrow AI specialties like search, code generation, or healthcare-focused solutions.