
How to Add AI to Your React Native App in 2026

Paweł Karniej·February 2026

The vibecoding era is here.

AI isn't just for web apps anymore. React Native developers can now integrate OpenAI, Claude, Whisper transcription, and image generation directly into mobile apps. I've done this across 10+ apps over the past year, and I'm going to show you exactly how.

Table of Contents

  • Why AI in Mobile Apps Now
  • The Architecture That Works
  • OpenAI Integration (GPT-4, DALL-E, Whisper)
  • Secure API Key Management
  • Building AI Chat Features
  • Image Generation in React Native
  • Voice Transcription with Whisper
  • Monetizing AI Features
  • Performance and UX Considerations
  • Real Examples from My Apps
  • Common Mistakes (And How I Fixed Them)
  • Next Steps

    Why AI in Mobile Apps Now?

    Two years ago, adding AI to a mobile app meant building complex backends, managing queues, and dealing with unreliable APIs. In 2026, it's different:

    What changed:

    • OpenAI API is stable and fast
    • Edge functions make backend setup trivial
    • React Native performance can handle real-time AI
    • Users expect AI features (it's not a novelty anymore)

    The opportunity:

    While everyone's building web AI wrappers, mobile AI apps are still underserved. YapperX (my voice memo app) does AI transcription better than desktop apps.

    The proof:

    • BeatAI: AI music practice coaching
    • VidNotes: AI video summarization on mobile
    • Newsletterytics: AI newsletter analysis
    • YapperX: Whisper transcription with AI insights

    All React Native. All shipping.

    The Architecture That Works

    After building AI features in 6+ apps, here's the architecture I standardized on:

    React Native App
        ↓ (HTTP requests)
    Convex Functions
        ↓ (secure API calls)
    OpenAI/Replicate/ElevenLabs APIs

    Why this works:

    • Your API keys never touch the client
    • Edge functions are globally distributed (low latency)
    • Convex handles auth/database integration
    • You can implement usage tracking and rate limiting
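
    That last bullet deserves a concrete sketch. Here is a minimal in-memory token bucket for per-user rate limiting; the `allowRequest` helper and the 10-requests-per-minute numbers are my own illustration, not a library API, and in production you would persist the buckets in your database so limits survive restarts:

```typescript
// Minimal in-memory token bucket: RATE requests per WINDOW_MS per user.
// Illustrative only -- persist buckets in your database in production.
interface Bucket {
  tokens: number
  lastRefill: number // ms timestamp of the last refill
}

const RATE = 10          // requests allowed...
const WINDOW_MS = 60_000 // ...per minute

const buckets = new Map<string, Bucket>()

function allowRequest(userId: string, now: number = Date.now()): boolean {
  const bucket = buckets.get(userId) ?? { tokens: RATE, lastRefill: now }
  // Refill proportionally to elapsed time, capped at RATE
  const elapsed = now - bucket.lastRefill
  bucket.tokens = Math.min(RATE, bucket.tokens + (elapsed / WINDOW_MS) * RATE)
  bucket.lastRefill = now
  const allowed = bucket.tokens >= 1
  if (allowed) bucket.tokens -= 1
  buckets.set(userId, bucket)
  return allowed
}
```

    Call `allowRequest(userId)` at the top of the edge function and return a 429 when it comes back false.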

    Alternative architectures I tried (and why they failed):

  • Direct API calls from React Native
    - Problem: API keys are exposed in the client
    - Security risk: anyone can decompile the app and steal keys

  • Traditional backend server
    - Problem: cold starts and maintenance overhead
    - Cost: $50+/month vs $0-5 with edge functions

  • Firebase Functions
    - Problem: vendor lock-in and limited flexibility
    - Performance: slower than Convex functions

    OpenAI Integration Step-by-Step

    Here's the exact setup I use in all my apps:

    1. Convex Function Setup

    // convex/functions/ai-chat/index.ts
    // NOTE: the handler below follows the Deno edge-function pattern, and the
    // database calls use a Supabase-style query builder for illustration --
    // adapt them to whatever client API your backend actually exposes.
    import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
    import { createClient } from 'https://esm.sh/@convex/convex-js@2'
    
    const openaiApiKey = Deno.env.get('OPENAI_API_KEY')
    const convexUrl = Deno.env.get('CONVEX_URL')
    const convexServiceKey = Deno.env.get('CONVEX_DEPLOY_KEY')
    
    const corsHeaders = {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Headers': 'authorization, content-type',
    }
    
    serve(async (req) => {
      if (req.method === 'OPTIONS') {
        return new Response('ok', { headers: corsHeaders })
      }
    
      try {
        const { messages, userId } = await req.json()
        
        // Initialize Convex client
        const convex = createClient(convexUrl!, convexServiceKey!)
        
        // Check user credits/limits
        const { data: user, error } = await convex
          .from('users')
          .select('ai_credits')
          .eq('id', userId)
          .single()
        
        if (error || !user) {
          return new Response(
            JSON.stringify({ error: 'User not found' }),
            { headers: { ...corsHeaders, 'Content-Type': 'application/json' }, status: 404 }
          )
        }
        
        if (user.ai_credits <= 0) {
          return new Response(
            JSON.stringify({ error: 'No credits remaining' }),
            { headers: { ...corsHeaders, 'Content-Type': 'application/json' }, status: 402 }
          )
        }
    
        // Call OpenAI
        const response = await fetch('https://api.openai.com/v1/chat/completions', {
          method: 'POST',
          headers: {
            'Authorization': `Bearer ${openaiApiKey}`,
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            model: 'gpt-4',
            messages: messages,
            max_tokens: 500,
            stream: false,
          }),
        })
    
        if (!response.ok) {
          throw new Error(`OpenAI request failed: ${response.status}`)
        }
        const data = await response.json()
        
        // Deduct credit
        await convex
          .from('users')
          .update({ ai_credits: user.ai_credits - 1 })
          .eq('id', userId)
    
        return new Response(
          JSON.stringify({ message: data.choices[0].message.content }),
          { headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
        )
      } catch (error) {
        return new Response(
          JSON.stringify({ error: error.message }),
          { headers: { ...corsHeaders, 'Content-Type': 'application/json' }, status: 500 }
        )
      }
    })

    2. React Native Client Code

    // hooks/useAIChat.ts
    import { useState } from 'react'
    import { convex } from '../lib/convex'
    import { useAuth } from './useAuth'
    
    interface Message {
      role: 'user' | 'assistant'
      content: string
    }
    
    export const useAIChat = () => {
      const [messages, setMessages] = useState<Message[]>([])
      const [loading, setLoading] = useState(false)
      const { user } = useAuth()
    
      const sendMessage = async (content: string) => {
        if (!user) return
    
        const newMessage: Message = { role: 'user', content }
        const updatedMessages = [...messages, newMessage]
        setMessages(updatedMessages)
        setLoading(true)
    
        try {
          // Illustrative invoke -- match your backend client's actual call signature
          const { data, error } = await convex.action('ai-chat', {
            body: { 
              messages: updatedMessages,
              userId: user.id 
            }
          })
    
          if (error) throw error
    
          const aiMessage: Message = { role: 'assistant', content: data.message }
          setMessages(prev => [...prev, aiMessage])
        } catch (error) {
          console.error('AI chat error:', error)
          // Handle error (show toast, etc.)
        } finally {
          setLoading(false)
        }
      }
    
      return { messages, sendMessage, loading }
    }

    3. Chat UI Component

    // components/AIChatScreen.tsx
    import React, { useState } from 'react'
    import { View, Text, TextInput, TouchableOpacity, ScrollView } from 'react-native'
    import { useAIChat } from '../hooks/useAIChat'
    
    export const AIChatScreen = () => {
      const [input, setInput] = useState('')
      const { messages, sendMessage, loading } = useAIChat()
    
      const handleSend = () => {
        if (input.trim()) {
          sendMessage(input)
          setInput('')
        }
      }
    
      return (
        <View style={{ flex: 1 }}>
          <ScrollView style={{ flex: 1, padding: 16 }}>
            {messages.map((message, index) => (
              <View 
                key={index}
                style={{
                  alignSelf: message.role === 'user' ? 'flex-end' : 'flex-start',
                  backgroundColor: message.role === 'user' ? '#007AFF' : '#F0F0F0',
                  padding: 12,
                  borderRadius: 8,
                  marginBottom: 8,
                  maxWidth: '80%'
                }}
              >
                <Text style={{ 
                  color: message.role === 'user' ? 'white' : 'black' 
                }}>
                  {message.content}
                </Text>
              </View>
            ))}
          </ScrollView>
          
          <View style={{ flexDirection: 'row', padding: 16 }}>
            <TextInput
              value={input}
              onChangeText={setInput}
              placeholder="Type a message..."
              style={{
                flex: 1,
                borderWidth: 1,
                borderColor: '#DDD',
                borderRadius: 8,
                padding: 12,
                marginRight: 8
              }}
            />
            <TouchableOpacity 
              onPress={handleSend}
              disabled={loading}
              style={{
                backgroundColor: '#007AFF',
                padding: 12,
                borderRadius: 8,
                justifyContent: 'center'
              }}
            >
              <Text style={{ color: 'white' }}>
                {loading ? '...' : 'Send'}
              </Text>
            </TouchableOpacity>
          </View>
        </View>
      )
    }

    Secure API Key Management

    Never put API keys in your React Native code. Ever.

    Here's what I learned the hard way:

    Wrong Approaches:

    // DON'T DO THIS
    const OPENAI_API_KEY = 'sk-...' // Exposed in bundle
    const config = {
      openai: process.env.OPENAI_API_KEY // Still bundled
    }

    Right Approach:

  • Store API keys in Convex environment variables
  • Create edge functions that proxy requests
  • Use Convex Auth to verify requests

    Environment setup:

    # In Convex dashboard > Settings > Environment Variables
    OPENAI_API_KEY=sk-your-key-here
    ELEVENLABS_API_KEY=your-key-here
    REPLICATE_API_TOKEN=your-token-here

    Building AI Chat Features

    The chat implementation above handles the basics. Here are advanced patterns I use:

    Streaming Responses

    // Modified edge function for streaming
    const stream = new ReadableStream({
      async start(controller) {
        const response = await fetch('https://api.openai.com/v1/chat/completions', {
          method: 'POST',
          headers: {
            'Authorization': `Bearer ${openaiApiKey}`,
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            model: 'gpt-4',
            messages: messages,
            stream: true,
          }),
        })
    
        const reader = response.body?.getReader()
        if (!reader) return
    
        while (true) {
          const { done, value } = await reader.read()
          if (done) break
          
          controller.enqueue(value)
        }
        controller.close()
      }
    })
    
    return new Response(stream, {
      headers: { ...corsHeaders, 'Content-Type': 'text/plain' }
    })
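
    On the client side, the streamed body arrives as server-sent-event lines. A small parser like the sketch below (the `parseSSEChunk` name is mine; it assumes OpenAI's standard `data: {...}` / `data: [DONE]` chunk format) extracts the text deltas so the UI can append them as they arrive:

```typescript
// Extract text deltas from one raw SSE chunk of an OpenAI streaming response.
// Events look like: data: {"choices":[{"delta":{"content":"Hi"}}]}
// and the stream ends with the sentinel: data: [DONE]
function parseSSEChunk(chunk: string): string[] {
  const deltas: string[] = []
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim()
    if (!trimmed.startsWith('data:')) continue
    const payload = trimmed.slice('data:'.length).trim()
    if (payload === '[DONE]') break
    try {
      const parsed = JSON.parse(payload)
      const content = parsed.choices?.[0]?.delta?.content
      if (typeof content === 'string') deltas.push(content)
    } catch {
      // Chunk boundaries can split a JSON event; a real client buffers partials
    }
  }
  return deltas
}
```

    Feed each decoded chunk from the response body through this and append `deltas.join('')` to the visible message.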

    Context Management

    const useAIChatWithContext = (systemPrompt: string) => {
      const [messages, setMessages] = useState<Message[]>([])
    
      const contextualMessages = useMemo(() => [
        { role: 'system', content: systemPrompt },
        ...messages
      ], [messages, systemPrompt])
    
      // Rest of implementation...
    }
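
    Long conversations eventually blow past the model's context window. One simple approach: keep the system prompt and drop the oldest messages until a rough token budget fits. This is a sketch with names of my own choosing, and the ~4 characters per token heuristic is approximate; a real implementation would use a proper tokenizer:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Rough heuristic: ~4 characters per token for English text
const approxTokens = (text: string) => Math.ceil(text.length / 4)

// Keep the system prompt plus the most recent messages that fit the budget
function trimToBudget(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const system = messages.filter(m => m.role === 'system')
  const rest = messages.filter(m => m.role !== 'system')
  let budget = maxTokens - system.reduce((sum, m) => sum + approxTokens(m.content), 0)
  const kept: ChatMessage[] = []
  // Walk backwards so the newest messages survive
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = approxTokens(rest[i].content)
    if (cost > budget) break
    kept.unshift(rest[i])
    budget -= cost
  }
  return [...system, ...kept]
}
```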

    Usage Tracking

    // Database schema for tracking
    CREATE TABLE ai_usage (
      id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
      user_id uuid REFERENCES auth.users NOT NULL,
      feature_type text NOT NULL, -- 'chat', 'image', 'transcription'
      tokens_used integer,
      cost_cents integer,
      created_at timestamp with time zone DEFAULT now()
    );
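
    With token counts recorded, computing `cost_cents` is just arithmetic. The prices in this sketch are placeholders, not real OpenAI rates; always read them from your provider's current pricing page:

```typescript
// Placeholder prices in cents per 1K tokens -- NOT real rates; check the
// provider's current pricing page before using numbers like these.
interface Pricing {
  inputCentsPer1K: number
  outputCentsPer1K: number
}

function estimateCostCents(
  inputTokens: number,
  outputTokens: number,
  pricing: Pricing,
): number {
  const cost =
    (inputTokens / 1000) * pricing.inputCentsPer1K +
    (outputTokens / 1000) * pricing.outputCentsPer1K
  return Math.ceil(cost) // round up so usage is never under-counted
}
```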

    Image Generation in React Native

    Adding DALL-E or Flux image generation to React Native:

    Edge Function for Image Generation

    // convex/functions/ai-image/index.ts
    serve(async (req) => {
      const { prompt, userId, model = 'dall-e-3' } = await req.json()
    
      // Check credits
      const { data: user } = await convex
        .from('users')
        .select('image_credits')
        .eq('id', userId)
        .single()
    
      if (!user || user.image_credits <= 0) {
        return new Response(JSON.stringify({ error: 'No image credits' }), {
          status: 402,
          headers: corsHeaders
        })
      }
    
      try {
        let imageUrl = ''
    
        if (model === 'dall-e-3') {
          const response = await fetch('https://api.openai.com/v1/images/generations', {
            method: 'POST',
            headers: {
              'Authorization': `Bearer ${openaiApiKey}`,
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              model: 'dall-e-3',
              prompt,
              n: 1,
              size: '1024x1024',
              quality: 'standard'
            }),
          })
          
          const data = await response.json()
          imageUrl = data.data[0].url
        } else {
          // Use Replicate for Flux Pro
          const response = await fetch('https://api.replicate.com/v1/predictions', {
            method: 'POST',
            headers: {
              'Authorization': `Token ${replicateApiToken}`,
              'Content-Type': 'application/json',
            },
            body: JSON.stringify({
              version: 'flux-pro-model-version',
              input: { prompt }
            }),
          })
          
          // Handle Replicate async response...
        }
    
        // Deduct credit
        await convex
          .from('users')
          .update({ image_credits: user.image_credits - 1 })
          .eq('id', userId)
    
        return new Response(JSON.stringify({ imageUrl }), {
          headers: corsHeaders
        })
      } catch (error) {
        return new Response(JSON.stringify({ error: error.message }), {
          status: 500,
          headers: corsHeaders
        })
      }
    })

    React Native Image Generation Hook

    // hooks/useImageGeneration.ts
    import { useState } from 'react'
    import { convex } from '../lib/convex'
    import { useAuth } from './useAuth'
    
    export const useImageGeneration = () => {
      const [loading, setLoading] = useState(false)
      const [generatedImage, setGeneratedImage] = useState<string | null>(null)
      const { user } = useAuth()
    
      const generateImage = async (prompt: string, model: 'dall-e-3' | 'flux-pro' = 'dall-e-3') => {
        if (!user) throw new Error('Not signed in')
        setLoading(true)
        try {
          const { data, error } = await convex.action('ai-image', {
            body: { prompt, userId: user?.id, model }
          })
    
          if (error) throw error
          
          setGeneratedImage(data.imageUrl)
          return data.imageUrl
        } catch (error) {
          console.error('Image generation error:', error)
          throw error
        } finally {
          setLoading(false)
        }
      }
    
      return { generateImage, generatedImage, loading }
    }

    Voice Transcription with Whisper

    Whisper integration for voice memos (like I use in YapperX):

    Audio Recording Hook

    import { useState } from 'react'
    import { Audio } from 'expo-av'
    
    export const useAudioRecording = () => {
      const [recording, setRecording] = useState<Audio.Recording>()
      const [isRecording, setIsRecording] = useState(false)
    
      const startRecording = async () => {
        try {
          const permission = await Audio.requestPermissionsAsync()
          if (permission.status !== 'granted') return
    
          await Audio.setAudioModeAsync({
            allowsRecordingIOS: true,
            playsInSilentModeIOS: true,
          })
    
          const { recording } = await Audio.Recording.createAsync(
            Audio.RecordingOptionsPresets.HIGH_QUALITY
          )
          setRecording(recording)
          setIsRecording(true)
        } catch (err) {
          console.error('Failed to start recording', err)
        }
      }
    
      const stopRecording = async () => {
        setIsRecording(false)
        await recording?.stopAndUnloadAsync()
        await Audio.setAudioModeAsync({ allowsRecordingIOS: false })
        
        const uri = recording?.getURI()
        setRecording(undefined)
        return uri
      }
    
      return { startRecording, stopRecording, isRecording }
    }

    Whisper Transcription Edge Function

    serve(async (req) => {
      const formData = await req.formData()
      const audioFile = formData.get('audio') as File
      const userId = formData.get('userId') as string
    
      // Convert to format Whisper expects
      const whisperFormData = new FormData()
      whisperFormData.append('file', audioFile)
      whisperFormData.append('model', 'whisper-1')
      whisperFormData.append('response_format', 'json')
    
      const response = await fetch('https://api.openai.com/v1/audio/transcriptions', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${openaiApiKey}`,
        },
        body: whisperFormData,
      })
    
      const data = await response.json()
      
      return new Response(JSON.stringify({ 
        transcription: data.text 
      }), {
        headers: corsHeaders
      })
    })

    React Native Transcription

    export const useTranscription = () => {
      const [transcribing, setTranscribing] = useState(false)
      const { user } = useAuth()
    
      const transcribeAudio = async (audioUri: string) => {
        setTranscribing(true)
        try {
          const formData = new FormData()
          formData.append('audio', {
            uri: audioUri,
            type: 'audio/m4a',
            name: 'recording.m4a',
          } as any)
          formData.append('userId', user?.id || '')
    
          const { data, error } = await convex.action('transcribe', {
            body: formData,
          })
    
          if (error) throw error
          
          return data.transcription
        } catch (error) {
          console.error('Transcription error:', error)
          throw error
        } finally {
          setTranscribing(false)
        }
      }
    
      return { transcribeAudio, transcribing }
    }

    Monetizing AI Features

    Here's how I monetize AI in my React Native apps:

    Credit-Based System

    // Database schema
    CREATE TABLE subscription_tiers (
      id text PRIMARY KEY,
      name text NOT NULL,
      ai_credits integer NOT NULL,
      image_credits integer NOT NULL,
      price_cents integer NOT NULL
    );
    
    INSERT INTO subscription_tiers VALUES 
    ('free', 'Free', 10, 3, 0),
    ('pro', 'Pro', 500, 100, 999),
    ('unlimited', 'Unlimited', -1, -1, 1999); -- -1 = unlimited
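
    The `-1 = unlimited` sentinel is easy to get wrong in checks, so I'd isolate it in two tiny helpers (names mine):

```typescript
// -1 is the "unlimited" sentinel from the subscription_tiers rows above
function canUseFeature(credits: number): boolean {
  return credits === -1 || credits > 0
}

// Unlimited stays unlimited; otherwise decrement, never below zero
function deductCredit(credits: number): number {
  return credits === -1 ? -1 : Math.max(0, credits - 1)
}
```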

    RevenueCat Integration

    // Assumes react-native-purchases (the RevenueCat SDK) is installed
    import Purchases from 'react-native-purchases'
    
    const useSubscription = () => {
      const [tier, setTier] = useState<'free' | 'pro' | 'unlimited'>('free')
      const { user } = useAuth()
    
      useEffect(() => {
        Purchases.addCustomerInfoUpdateListener(async (customerInfo) => {
          const activeEntitlements = customerInfo.entitlements.active
    
          let newTier: 'free' | 'pro' | 'unlimited' = 'free'
          if (activeEntitlements['unlimited']) {
            newTier = 'unlimited'
          } else if (activeEntitlements['pro']) {
            newTier = 'pro'
          }
          setTier(newTier)
    
          // Persist the freshly computed tier (not the stale `tier` state)
          await convex
            .from('users')
            .update({ subscription_tier: newTier })
            .eq('id', user?.id)
        })
      }, [])
    
      return { tier }
    }

    Smart Paywall Placement

    Based on data from my apps:

    • Show paywall after 3 successful AI interactions
    • Don't interrupt mid-conversation
    • Offer credits as one-time purchase + subscription
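
    Those three rules are easy to encode as a pure function, which also makes the paywall logic testable. A sketch (the types and names here are my own, not from any SDK):

```typescript
interface PaywallState {
  successfulInteractions: number
  isSubscribed: boolean
  midConversation: boolean
}

// Encode the rules above: show after 3 successful interactions,
// never to subscribers, never in the middle of a conversation.
function shouldShowPaywall(state: PaywallState): boolean {
  if (state.isSubscribed) return false
  if (state.midConversation) return false
  return state.successfulInteractions >= 3
}
```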

    Performance and UX Considerations

    Loading States

    const AIFeature = () => {
      const [stage, setStage] = useState<'idle' | 'processing' | 'complete'>('idle')
      
      const statusMessages = {
        idle: '',
        processing: 'AI is thinking...',
        complete: 'Done!'
      }
      
      return (
        <View>
          {stage === 'processing' && (
            <ActivityIndicator size="small" />
          )}
          <Text>{statusMessages[stage]}</Text>
        </View>
      )
    }

    Optimistic Updates

    const sendMessage = async (content: string) => {
      // Add message immediately (optimistic)
      const optimisticMessage = { role: 'user', content }
      setMessages(prev => [...prev, optimisticMessage])
    
      try {
        const response = await callAI(content)
        // Replace with real response
        setMessages(prev => [...prev, response])
      } catch (error) {
        // Remove optimistic message on error
        setMessages(prev => prev.slice(0, -1))
      }
    }

    Caching Strategies

    import AsyncStorage from '@react-native-async-storage/async-storage'
    
    // encodeURIComponent first: btoa throws on non-ASCII input
    const cacheKey = (prompt: string) => `ai_cache_${btoa(encodeURIComponent(prompt))}`
    
    const cacheResponse = async (prompt: string, response: string) => {
      await AsyncStorage.setItem(cacheKey(prompt), response)
    }
    
    const getCachedResponse = async (prompt: string) => {
      return await AsyncStorage.getItem(cacheKey(prompt))
    }

    Real Examples from My Apps

    YapperX: Voice Memo AI

    Problem: Voice memos are hard to search and organize

    Solution: Whisper transcription + GPT-4 summarization

    Key features:

    • Record → Transcribe → Summarize in one flow
    • Smart categorization with AI
    • Search through transcribed content

    BeatAI: AI Music Practice Coach

    Problem: Musicians struggle to stay accountable with practice

    Solution: AI-powered practice companion that helps you improve

    Key features:

    • AI-generated practice exercises
    • Smart practice tracking
    • Accountability features for daily practice

    VidNotes: AI Video Summarization

    Problem: Long videos, no time to watch

    Solution: Extract audio → Whisper → GPT-4 summary

    Key features:

    • Upload video, get instant summary
    • Timestamps for key moments
    • Export summaries as notes
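
    The "timestamps for key moments" feature boils down to formatting transcript segments before prompting the model. A minimal sketch of that prompt-building step (the segment shape and helper names are assumptions, not VidNotes' actual code):

```typescript
// Hypothetical transcript segment shape -- not VidNotes' actual types
interface TranscriptSegment {
  startSec: number
  text: string
}

// Format seconds as m:ss so the model can cite moments by timestamp
function formatTimestamp(sec: number): string {
  const m = Math.floor(sec / 60)
  const s = Math.floor(sec % 60)
  return `${m}:${String(s).padStart(2, '0')}`
}

// Prefix each segment with its timestamp and ask for a cited summary
function buildSummaryPrompt(segments: TranscriptSegment[]): string {
  const body = segments
    .map(s => `[${formatTimestamp(s.startSec)}] ${s.text}`)
    .join('\n')
  return `Summarize this transcript. Cite key moments by their [m:ss] timestamps.\n\n${body}`
}
```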

    Common Mistakes (And How I Fixed Them)

    1. Exposing API Keys

    Mistake: Put OpenAI key directly in React Native code

    Fix: Use Convex functions as secure proxy

    2. No Usage Limits

    Mistake: Let users spam AI features

    Fix: Credit system + rate limiting

    3. Poor Error Handling

    Mistake: App crashes when AI API is down

    Fix: Graceful fallbacks + retry logic
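
    Retry logic is worth spelling out. A generic exponential-backoff wrapper like this (a sketch of my own, not a library API; the delay numbers are illustrative) covers transient AI-API failures without hammering the endpoint:

```typescript
// Retry an async call with exponential backoff: 250ms, 500ms, 1000ms, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < maxAttempts - 1) {
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError
}
```

    Wrap the fetch to your edge function in `withRetry` and surface a friendly fallback message only after the final attempt fails.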

    4. Ignoring Latency

    Mistake: No loading states for AI calls

    Fix: Optimistic updates + progress indicators

    Next Steps

    If you want to add AI to your React Native app:

  • Start simple: Add basic GPT chat first
  • Use edge functions: Keep API keys secure
  • Implement credits: Monetize from day one
  • Focus on UX: AI is only as good as the experience

    The vibecoding era is here. Mobile apps that integrate AI well will dominate the next 2 years.

    Want the exact setup I use? Ship React Native includes all the edge functions, React Native code, and database schemas from this guide. No copy-pasting code from blog posts.

    Get Ship React Native and start shipping AI features today.


    This post was written in February 2026 by Paweł Karniej, who has shipped 10+ React Native apps with AI integrations. Follow @thepawelk for more real-world mobile dev insights.