Testing Strategies
Electron apps have unique testing challenges: multiple processes, IPC communication, and platform-specific behavior. Here's how to build confidence in your code.
The testing pyramid applies to Electron: lots of fast unit tests, some integration tests for IPC boundaries, and a few E2E tests for critical user flows. The twist is that you're testing across process boundaries and multiple operating systems.
The Electron Testing Pyramid
Different test types serve different purposes. Unit tests catch logic bugs fast. Integration tests verify IPC contracts. E2E tests ensure the whole app works together.
Unit Tests (fast: milliseconds)
Test individual functions and classes in isolation. Mock dependencies.
~80% of your tests.
Integration Tests (medium: seconds)
Test IPC handlers, service interactions, database operations.
~15% of your tests. Verify boundaries work.
E2E Tests (slow: minutes)
Test complete user flows in a real Electron app.
~5% of your tests. Critical paths only.
Unit Testing: Fast Feedback
Unit tests run in Node.js without Electron. They're fast, reliable, and catch most bugs. The key is keeping your business logic separate from Electron APIs so it's testable.
🎯 VibeBlaster Testing Strategy
VibeBlaster has ~300 unit tests that run in 4 seconds. They test the scheduling logic, content validation, API response parsing—all the business logic that doesn't need Electron. These catch 90% of bugs before I even run the app.
The trick: keep business logic in pure functions that take inputs and return outputs. No side effects, no Electron imports.
Testing Pure Business Logic
Extract logic into pure functions. They're trivial to test—no mocking needed.
// src/utils/scheduler.ts - Pure function, no Electron
export function calculateNextPostTime(
  posts: Post[],
  timezone: string
): Date {
  // Business logic here
}

// src/utils/__tests__/scheduler.test.ts
describe('calculateNextPostTime', () => {
  it('should schedule next post 4 hours after last', () => {
    const posts = [{ scheduledAt: new Date('2024-01-01T10:00:00') }];
    const result = calculateNextPostTime(posts, 'America/New_York');
    expect(result.getHours()).toBe(14);
  });

  it('should skip weekends if configured', () => {
    // Test edge cases...
  });
});
Mocking Electron APIs
When you must test code that uses Electron, mock the APIs. Create a mock file that Jest uses instead of the real Electron module.
// __mocks__/electron.ts
export const app = {
  getPath: jest.fn((name) => `/mock/${name}`),
  getVersion: jest.fn(() => '1.0.0'),
};

export const ipcMain = {
  handle: jest.fn(),
  on: jest.fn(),
};

// In your test
jest.mock('electron');
import { app } from 'electron';

test('uses correct data path', () => {
  const path = getDataPath();
  expect(app.getPath).toHaveBeenCalledWith('userData');
});
Integration Testing: IPC Boundaries
Integration tests verify that your IPC handlers work correctly. They test the contract between main and renderer processes without spinning up a full Electron app.
What to Test
- IPC handlers return correct data
- Input validation rejects bad data
- Error handling works properly
- Database operations succeed
- File operations are secure
How to Test
- Extract handler logic into testable functions
- Mock file system with memfs or temp dirs
- Use in-memory SQLite for database tests
- Mock external APIs with nock or msw
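The first point, extracting handler logic into a pure, testable function, can be sketched with a small dependency-free validator. `validateSavePayload` is a hypothetical helper name, not from the original code:

```typescript
// A minimal sketch of IPC input validation extracted into a pure function.
// validateSavePayload is a hypothetical name for illustration.
export interface SavePayload {
  filename: string;
  content: string;
}

export function validateSavePayload(payload: unknown): SavePayload {
  if (typeof payload !== 'object' || payload === null) {
    throw new Error('Payload must be an object');
  }
  const { filename, content } = payload as Record<string, unknown>;
  if (typeof filename !== 'string' || filename.length === 0) {
    throw new Error('Invalid filename');
  }
  // Reject separators and dot-dot so the handler never sees a traversal attempt
  if (filename.includes('/') || filename.includes('\\') || filename.includes('..')) {
    throw new Error('Invalid filename');
  }
  if (typeof content !== 'string') {
    throw new Error('Invalid content');
  }
  return { filename, content };
}
```

Because it never touches `ipcMain` or the file system, this function tests in microseconds with plain assertions; the thin `ipcMain.handle` wrapper that calls it needs no test of its own.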
Testing IPC Handlers
// Extract handler logic for testing
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

export async function handleFileSave(
  filename: string,
  content: string,
  basePath: string
) {
  // Validation
  if (!filename || typeof filename !== 'string') {
    throw new Error('Invalid filename');
  }

  // Path security: resolve the target and verify it stays inside basePath.
  // (path.basename alone would silently sanitize a traversal attempt
  // instead of rejecting it, and the test below would never see a throw.)
  const safePath = path.resolve(basePath, filename);
  if (!safePath.startsWith(path.resolve(basePath) + path.sep)) {
    throw new Error('Path traversal detected');
  }

  await fs.promises.writeFile(safePath, content);
  return { success: true, path: safePath };
}

// Test
describe('handleFileSave', () => {
  const tempDir = path.join(os.tmpdir(), 'test-' + Date.now());

  beforeEach(() => fs.mkdirSync(tempDir, { recursive: true }));
  afterEach(() => fs.rmSync(tempDir, { recursive: true }));

  it('saves file to correct location', async () => {
    const result = await handleFileSave('test.txt', 'content', tempDir);
    expect(result.success).toBe(true);
    expect(fs.existsSync(result.path)).toBe(true);
  });

  it('rejects path traversal attempts', async () => {
    await expect(
      handleFileSave('../../../etc/passwd', 'hack', tempDir)
    ).rejects.toThrow();
  });
});
E2E Testing: The Full Picture
E2E tests launch your actual Electron app and interact with it like a user would. They're slow but catch integration issues that unit tests miss. Use them for critical user flows.
Playwright for Electron
Playwright has first-class Electron support. It can launch your app, interact with UI elements, and even access the main process for testing IPC.
// e2e/app.spec.ts
import { test, expect, _electron as electron } from '@playwright/test';

test('user can create and save a document', async () => {
  // Launch Electron app
  const app = await electron.launch({ args: ['./dist/main.js'] });
  const page = await app.firstWindow();

  // Interact with UI
  await page.click('[data-testid="new-document"]');
  await page.fill('[data-testid="editor"]', 'Hello, World!');
  await page.click('[data-testid="save"]');

  // Verify result
  await expect(page.locator('[data-testid="status"]'))
    .toHaveText('Saved');

  await app.close();
});
✓ Good E2E Test Candidates
- User signup/login flow
- Core feature happy path
- Critical business workflows
- Settings persistence
✗ Skip E2E For
- Edge cases (use unit tests)
- Validation logic (use unit tests)
- Every possible user path
- Styling/visual details
⚠️ E2E Tests Are Slow
Each E2E test launches a full Electron app—expect 5-30 seconds per test. Keep E2E tests focused on critical paths. If you have 100 E2E tests, your CI will take forever. Aim for 10-20 E2E tests covering the most important flows.
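These budgets can be made explicit in the Playwright config. A sketch only: `testDir`, the timeout, and the single-worker setting are illustrative assumptions, not the original project's actual configuration.

```typescript
// playwright.config.ts: illustrative values, not the project's real config
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  timeout: 60_000,                  // generous per-test budget, since app launch is slow
  workers: 1,                       // serial runs avoid Electron instances clashing over user-data dirs
  retries: process.env.CI ? 2 : 0,  // absorb flaky launches in CI only
});
```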
CI/CD: Automated Testing
Run tests automatically on every push. The key with Electron is testing on all target platforms—what works on macOS might break on Windows.
GitHub Actions Matrix
Use a matrix strategy to run tests on macOS, Windows, and Linux in parallel.
# .github/workflows/test.yml
name: Test
on: [push, pull_request]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm test
      - run: npm run build

      # E2E tests need a display server on Linux; xvfb-run provides one
      - name: E2E Tests (Linux)
        if: runner.os == 'Linux'
        run: xvfb-run -a npm run test:e2e
      - name: E2E Tests
        if: runner.os != 'Linux'
        run: npm run test:e2e

      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: test-results-${{ matrix.os }}
          path: test-results/
Speed Tips
- Cache node_modules between runs
- Run unit tests first (fail fast)
- Parallelize E2E tests across machines
- Skip E2E on draft PRs
Coverage Goals
- 80%+ line coverage target
- 100% on critical business logic
- Don't chase 100% overall—diminishing returns
- Focus on code that changes often
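Coverage goals like these can be enforced by the test runner rather than left as policy. A sketch for Jest; the numbers mirror the goals above, and the per-file path is a hypothetical example:

```typescript
// jest.config.ts: threshold numbers are illustrative, the scheduler path is hypothetical
import type { Config } from 'jest';

const config: Config = {
  collectCoverageFrom: ['src/**/*.ts'],
  coverageThreshold: {
    global: { lines: 80 },                       // overall floor for the repo
    './src/utils/scheduler.ts': { lines: 100 },  // critical business logic held to 100%
  },
};

export default config;
```

With thresholds in place, `npm test -- --coverage` fails the build when coverage dips, so regressions in the critical paths surface in CI instead of code review.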
Testing Best Practices
Tests Give You Confidence to Ship
With a solid test suite, you can refactor fearlessly, ship updates confidently, and catch regressions before users do. The investment pays off every time you don't ship a bug.
Next up: Packaging & Distribution. Your tests pass, your code works—now let's turn it into installers users can actually download and run.