AI Vision

The Rifoa Story

Discover the vision behind Rifoa and how we're building intelligence infrastructure for the post-scarcity economy. Learn about our journey, philosophy, and the future of AI-powered business transformation.

01/23/2026
25 min read

What You'll Discover

  • The real story behind why Rifoa was founded and our core mission
  • How AI is transforming from "talkers" to "doers" in business
  • The future of work and the post-scarcity economy
  • Why context engineering is the critical skill of the AI era

Why I Started This

I'm going to be honest with you. I didn't start Rifoa because I saw a gap in the market or because some business guru told me there was opportunity. I started it because I got tired of watching people waste their lives on work that shouldn't exist anymore.

You don't even need to be a computer scientist to capitalize on what's happening right now. Businesses are desperate. Small companies have no clue how AI can help them, but they know they're falling behind. You just need to understand the business problem and know which AI tool solves it. That's the whole game.

The Problem Nobody Wants to Talk About

Here's what I realized working in banking and tech: most phone calls to customer service weren't about features in the app. Customers already knew the app well; the people calling had actual problems, unique ones the app couldn't solve. We had to remove the AI option from calls, not because AI was problematic, but because the company had misdiagnosed the problem.

My team did the analysis. We figured out the real use cases. Like when customers couldn't pay because their card hit the monthly limit. Just making that obvious before they got a generic "transaction failed" message changed everything. Customer satisfaction went up. These things require deep thinking about the actual problem.

I've seen executives misdiagnose issues. They tell you one problem, but when you do a deep dive, you find something completely different. That's what fascinates me. That's the work I love.

What AGI Actually Means

Let me give you a functional definition of AGI that cuts through all the philosophy: AGI is the ability to figure things out. That's it.

You want something done? You need someone (or something) that can just figure stuff out. How it happens matters less than the fact that it happens.

A human who can figure things out has baseline knowledge, the ability to reason over that knowledge, and the ability to iterate their way to the answer. An AI that can figure things out has baseline knowledge (pre-training), the ability to reason (inference-time compute), and the ability to iterate (long-horizon agents).

The Three Components of AGI

Baseline Knowledge

Pre-training on vast amounts of data (2022)

Reasoning Ability

Inference-time compute for complex reasoning (late 2024)

Iteration Capability

Long-horizon agents that can work autonomously (2026)

From Talkers to Doers

The AI applications of 2023 and 2024 were talkers. Some were sophisticated conversationalists, sure. But their impact was limited.

The AI applications of 2026 and 2027 will be doers. They'll feel like colleagues. Usage will go from a few times a day to all day, every day, with multiple instances running in parallel. Users won't save a few hours here and there. They'll go from working as an individual contributor to managing a team of agents.

Remember all that talk about selling work instead of software? Now it's actually possible.

My Experience with AI Agents

I've had an LLM living inside my repositories. Writing features, refactoring logic, fixing race conditions, testing edge cases. It scans the codebase, points out inconsistencies, proposes cleaner abstractions. It debugs async issues, rewrites functions when state handling gets messy, flags performance problems before they show up in production.

It keeps a running mental model of what the system should be, updates docs as things change, helps reason through tradeoffs when I'm unsure which direction to take. It makes mistakes. Sometimes subtle, sometimes architectural. I've had to correct assumptions, reject over-engineered solutions, simplify things it tried to generalize too early.

I'm still fully in the loop. Design decisions are mine. Taste is mine. Direction is mine. But the experience of having something continuously grinding through implementation details, sanity-checking ideas, accelerating feedback cycles feels like a real shift in how work gets done.

The Real Skill Now

It's often not a skill issue but a context issue. I've found that AI goes off the rails more often when the codebase is internally inconsistent. If the codebase consistently follows well-defined rules and conventions, the AI follows them too, pretty much every time.

You need to be knowledgeable about core software engineering dynamics, then set up scaffolding in your repos that allows coding agents to operate smoothly. Spec-driven development, test-driven development, domain-driven design, containerization, parameterized tests, behavior-driven development, design patterns. These are advanced topics that aren't usually taught in detail at school, even at the graduate level.
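One concrete piece of that scaffolding: parameterized tests give a coding agent an unambiguous, machine-checkable spec of the behavior it must preserve. A minimal sketch with pytest, using a hypothetical `apply_discount` function as the domain logic:

```python
import pytest

def apply_discount(price: float, rate: float) -> float:
    """Hypothetical domain function: apply a discount rate to a price."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# Each row is a behavioral contract the agent must keep satisfying.
@pytest.mark.parametrize(
    "price, rate, expected",
    [
        (100.0, 0.0, 100.0),   # no discount
        (100.0, 0.25, 75.0),   # ordinary case
        (19.99, 1.0, 0.0),     # full-discount edge case
    ],
)
def test_apply_discount(price, rate, expected):
    assert apply_discount(price, rate) == expected
```

An agent working in a repo with tests like these gets immediate, unambiguous feedback the moment a refactor changes behavior, which is exactly the kind of rail that keeps it on track.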

Someone recently made a great point: context engineering is the future. You're reverse engineering what an insanely smart human would need to perform a particular task. The caveat? This super smart person is an expert at almost any field of work, but one day they're a lawyer at a Fortune 500 and the next day they're an engineer at a startup. They forget what they did between each task. They can only keep track of one medium-sized thing at a time.

Context Engineering Components

  • Search and retrieval systems
  • Heuristics for ranking information
  • System prompts and context management
  • Work tracking to save context window space
  • Processing vastly more data than humans can handle
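A minimal sketch of how those pieces fit together, using a deliberately crude keyword-overlap ranker and a fixed character budget as a stand-in for the context window (real systems use embeddings and far richer heuristics):

```python
def score(snippet: str, task: str) -> int:
    """Crude relevance heuristic: count task words that appear in the snippet."""
    task_words = set(task.lower().split())
    return sum(1 for w in set(snippet.lower().split()) if w in task_words)

def build_context(task: str, snippets: list[str], budget_chars: int) -> str:
    """Rank retrieved snippets by relevance and pack the best ones
    into a fixed context budget, most relevant first."""
    ranked = sorted(snippets, key=lambda s: score(s, task), reverse=True)
    picked, used = [], 0
    for s in ranked:
        if used + len(s) <= budget_chars:
            picked.append(s)
            used += len(s)
    return "\n---\n".join(picked)
```

The point isn't this toy ranker. It's the shape of the job: retrieve, rank, and fit exactly the information the task needs into a window that is always too small.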

What Rifoa Actually Does

Here's the truth: off-the-shelf LLMs alone are basically useless for real enterprise value. Alex Karp from Palantir said it best: you need custom orchestration on top, something your company actually speaks and understands. That's the whole game. Not just throwing GPT or Claude at problems, but building the glue that makes AI work in messy, regulated, high-stakes environments.

Most companies are still in the "let's prompt ChatGPT" phase and wondering why ROI sucks. The moat isn't the model. The moat is the plumbing.

Rifoa adapts off-the-shelf LLMs to your domain and enterprise language. That's where you can get value. We're not selling you AI. We're selling you automation of work you've always wanted to do but couldn't afford the time or cost.

How Enterprise AI Really Works

At the enterprise level, no one uses these models as one-shots. You don't just connect the API, feed it a question, and expect it to be right. Companies that do that are the ones saying "AI is bullshit, it failed to deliver."

Instead, you create multiple AIs, known as agents, that work together as a team to solve problems. When I send out a request, I'm not expecting one model to shoot out a complicated answer in one go. One agent does a first draft, gets challenged by another, helped by a third, guided by a fourth. They keep working together until they reach a satisfactory result.
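A toy sketch of that draft-critique-revise loop. The `llm` callable is a hypothetical stand-in for a real model API; the loop structure, not the stub, is the point:

```python
from typing import Callable

def solve(task: str, llm: Callable[[str], str], rounds: int = 3) -> str:
    """Multi-agent loop: a drafter proposes, a critic challenges,
    and a reviser incorporates the critique, for a few rounds."""
    draft = llm(f"Draft a solution to: {task}")
    for _ in range(rounds):
        critique = llm(f"Find flaws in this solution to '{task}':\n{draft}")
        if "no flaws" in critique.lower():  # crude stopping heuristic
            break
        draft = llm(
            f"Revise the solution to fix these flaws:\n{critique}\n\n"
            f"Solution:\n{draft}"
        )
    return draft
```

In production systems, the drafter, critic, and reviser are often different models or differently prompted instances, with tools and retrieval layered in; the secret sauce is in how they're orchestrated, not in any single call.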

These are the powerful private AI systems at massive corporations, and they aren't sharing them because that's their secret sauce. They don't want to share with the competition. These are the companies laying off 30% of their staff because they've internally created great agent systems.

The Long-Horizon Agent Exponential

If there's one exponential curve to bet on, it's the performance of long-horizon agents. The rate of progress is exponential, doubling every 7 months. If we trace out the exponential, agents should be able to work reliably to complete tasks that take human experts a full day by 2028, a full year by 2034, and a full century by 2037.
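Those dates follow mechanically from the 7-month doubling. A back-of-the-envelope calculation (my own arithmetic on the stated doubling period, not an independent forecast):

```python
import math

DOUBLING_MONTHS = 7  # stated doubling period of agent task horizons

def years_to_grow(factor: float) -> float:
    """Years for the reliable task horizon to grow by `factor`,
    assuming it doubles every DOUBLING_MONTHS months."""
    return math.log2(factor) * DOUBLING_MONTHS / 12

# Day -> year: 365x growth is ~8.5 doublings, ~5 years (2028 -> ~2033-34).
day_to_year = years_to_grow(365)
# Year -> century: 100x growth is ~6.6 doublings, ~3.9 years (-> ~2037-38).
year_to_century = years_to_grow(100)
```

The striking part is the second step: going from a year to a century takes fewer years than going from a day to a year, because on an exponential, each successive order of magnitude costs the same fixed time.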

Soon you'll be able to hire an agent. That's one litmus test for AGI.

The Agent Capability Timeline

  • 2026: Agents work reliably for 30 minutes
  • 2028: Agents complete a full day's work
  • 2034: Agents accomplish a full year's work
  • 2037: Agents achieve a full century's worth of work

What can you achieve when your plans are measured in centuries? A century is 200,000 clinical trials no one's cross-referenced. A century is every customer support ticket ever filed, finally mined for signal. A century is the entire U.S. tax code, refactored for coherence.

The ambitious version of your roadmap just became the realistic one.

Why Context Engineering Matters

We'll soon get to a point where, almost any time an AI agent fails at a reasonably sized task, you'll be able to trace the failure to the agent lacking access to the right information.

This is why context engineering is critical. The agent is that insanely smart but amnesiac expert: world-class at almost any field, yet it forgets what it did between tasks and can only track one medium-sized thing at a time. Your job is to hand it exactly what the task needs.

The Shift Is Already Happening

Aggressively JIT (just-in-time) your work. It's only a little about the task at hand; it's mostly about contributing almost no latency and almost no manual actions of your own. It's digital factory time.

Instead of doing tasks, you start designing pipelines. Your notes should feed your code. Your code should feed your experiments. Your experiments should feed dashboards. Your dashboards should feed decisions.
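A minimal sketch of that shape, with each stage as a function feeding the next and a human entering only at the final decision gate (the stage bodies are placeholders, not real tooling):

```python
from typing import Callable

def run_pipeline(notes: str, decide: Callable[[dict], bool]) -> str:
    """Automated stages feed each other; the human only approves the decision."""
    code = f"# generated from notes\n{notes}"        # notes -> code (placeholder)
    experiment = {"result": len(code)}               # code -> experiment (placeholder)
    dashboard = {"metric": experiment["result"]}     # experiment -> dashboard
    # The human appears only here, at the decision point, never mid-execution.
    return "ship" if decide(dashboard) else "iterate"
```

Swap the placeholders for real generators, runners, and dashboards, and the `decide` callback is the only place a person spends attention.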

Human input only happens at decision points, not execution.

Why This Matters

Latency compounds. Manual steps scale linearly with workload. Automated pipelines scale near zero marginal cost. The highest leverage work isn't completing tasks faster. It's redesigning the system so the task almost doesn't exist.

The Pattern We Keep Seeing

Steam engines were invented around 1700. What followed was 200 years of steady improvement, with engines getting roughly 20% better per decade. For the first 120 years of that improvement, horses didn't notice at all. Then, between 1930 and 1950, 90% of the horses in the US disappeared.

Progress in engines was steady. Equivalence to horses was sudden.

Computer chess improved by 50 Elo per year for decades. In 2000, a human grandmaster could expect to win 90% of games against a computer. Ten years later, the same grandmaster would lose 90% of games against a computer.

Progress in chess was steady. Equivalence to humans was sudden.

Now look at AI. Someone at Anthropic shared this: back in 2024, old-timers were answering about 4,000 new-hire questions a month. Then in December, Claude finally got good enough to answer some of those questions. Six months later, 80% of the questions had disappeared. Claude was now answering 30,000 questions a month, more than seven times as many as humans ever did.

While it took horses decades to be overcome, and chess masters years, it took six months to be surpassed. Surpassed by a system that costs one thousand times less. A system that costs less per word thought or written than it would cost to hire the cheapest human labor on the face of the planet.

What This Means for Jobs

There are going to be new jobs, lots and lots of them. AI researchers talk about the singularity, an intelligence explosion. But when you look at the data, you see something else happening in parallel: a curve of rapidly accelerating job creation. I call this the "job singularity." A Cambrian explosion, not just of new jobs, but of entirely new job families across almost every imaginable field.

The internet gave people worldwide reach. AI gives them a world-class staff.

Future Job Trends

  • Micro corporations and solo institutions
  • Single-person unicorns
  • Entirely new job families
  • Jobs that don't look like "real work" to us
  • Entrepreneurial explosion driven by AI staff

The Rifoa Philosophy

My goal, in work and as a hobby: automate end to end. Not just one use case or one department. Use case by use case for now, but with a vision of complete automation.

When I talk to CTOs or CFOs, they want specific things. I mention what other executives wanted, and they think: "Holy shit, I'm trying to do the same thing."

People are willing to pay a smart person to solve their problem, rather than just buying a tool. This is demand-side selling: understanding what progress people want to make and what they're willing to pay to make that progress. Your product or service is merely part of their solution. You create pull for your product by focusing on helping the customer.

My Positioning

AI solves problems people already have, rather than being a horizontal tool for any use case. Nobody wants AI for AI's sake. They want to accomplish things they've always wanted to do. AI just helps them achieve it easier, faster, cheaper, or better than existing options.

The Business Model Innovation

Here's what I realized: we pay 20,000 AED for a mid-level manager. Why not pay that to an AI company that's far more productive?

AI can now handle search, email, and PDF tasks. Why not automate them? With good human-in-the-loop, you can automate 90% of a process now. The human is there only for the 10% that requires reliability. After a few years, even that human may not be needed.

Business Model Warning

If you can't change your business model, you will fail. Little pain now avoids much bigger pain later.

Why Abu Dhabi, Why Now

I'm starting in Abu Dhabi and expanding out to the rest of the developing world. Geopolitics matters. In Qatar, Brookfield is behind a $17B sovereign data center. In the cold war between China and the USA, each side will run its own data systems. The Middle East becomes a critical third pole.

The plan: get into robotics to drive adoption when it reaches baseline performance around 2027, and fully automate a construction company end to end.

The Human Element

Most of these AI benchmarks are meaningless for the average user anyway, and even for advanced users, because people aren't going to use AI for textbook questions or quizzes.

Output quality still depends heavily on input quality. That was true two years ago, and it's still true for today's bleeding-edge models. Always test these models for your specific use case. It's going to be subjective anyway.

These are still early days, so I do the sales, set up the infrastructure, handle the maintenance. This is the kind of work I enjoy. Until I find someone else who finds this line of work interesting, cares, and has passion, I don't mind doing it myself. I don't want to be a CEO just ordering people around. I want to get my hands dirty with the technical work. I love problem-solving. It's how I get into a flow state.

The Vision: Minimal Suffering for All

My ultimate goal is reducing suffering to a minimum for everyone. Giving young people, and every person, amazing, fulfilling jobs. Revenue sharing for all employees. A new kind of business.

I'm committing fully to the future I want to see and doing my part in willing it to happen.

What's Actually Possible Now

The strongest teams come prepared with clear answers to the questions customers will ask: Do you train on my data? How are prompts and outputs logged? What are the safeguards against hallucinations?

AI-native success teams work directly with customers to design prompt logic and data integrations while passing feedback to GTM and product teams. This role becomes especially critical in the post-sales process as a core point of customer interaction.

Sales teams spend less time doing data entry and account research, especially with modern tools. Reps can now spend more time with the customer, understanding their specific needs deeply.

One annoyance people feel: doing repetitive things that technology can now automate, like creating email drafts for tenders. Automating them speeds things up, the same way the stock market became automated.

Another angle I take: asking employees about ideas they've always wanted to automate. They probably have things in their head that they could start automating now. Or I look at their workflows and find the tasks that are quick to automate and bring high ROI, the low-hanging fruit.