
Migrate Your Drupal Site to GitHub

Drupal teams know Git. Most professional Drupal development uses Git for version control — feature branches, pull requests, deployment tags. The Drupal community adopted Git workflows years ago when Drupal.org itself migrated from CVS.

But here is the disconnect: Git only manages part of a Drupal site. The codebase lives in Git. The content lives in a database. The configuration lives partially in YAML exports and partially in the database. File uploads live on a server filesystem. A Drupal Git repository is not the site — it is a piece of the site. Checking out the repo on a fresh machine gives you nothing usable without a database import, a file sync, and a working PHP environment.

This guide is about making Git the single source of truth for your entire website. Not “Git for the Drupal codebase” but “Git for everything” — content, design, configuration, and assets, all in one repository. When the repo is the site, deployment is git push, collaboration is pull requests, and history is git log.

For Drupal teams accustomed to the complexities of config management, database-dependent deployments, and Drush orchestration, this is a genuine simplification — but one that comes with tradeoffs worth understanding.

How Drupal uses Git today (and where it falls short)

A typical Drupal development workflow looks like this:

  1. Developer creates a feature branch
  2. Makes code changes (custom module, theme, config)
  3. Exports config: drush config:export
  4. Commits code + YAML config files
  5. Opens a pull request
  6. After merge, deploys to staging
  7. Runs drush updatedb for database updates
  8. Runs drush config:import to apply config changes
  9. Runs drush cache:rebuild to clear caches
  10. Tests and then repeats for production

This works. Drupal teams have refined this workflow over years. But there are fundamental limitations:

Content is not in Git. When an editor publishes a blog post, it goes into the database. There is no commit, no history, no diff. If someone accidentally deletes content, you restore from a database backup — not git revert. Content changes are invisible to the development workflow.

Config sync is fragile. Drupal 8+ introduced config management, and it was a huge improvement over D7. But config conflicts between environments are a notorious pain point. Two developers changing different aspects of the same View, or a content editor changing a field setting in production while a developer is modifying it in staging — these create merge conflicts that are resolved in YAML files, not code.

The database is a hidden dependency. You can clone the Git repo and install its dependencies, but the site still does not work without a database. You need a sanitized database dump, which is a separate artifact that lives outside of Git. This means your repo does not contain your site — it contains the code that, combined with a specific database state, produces your site.

Deployment is multi-step. Pushing code is never enough. You need updatedb, config:import, cache:rebuild, and sometimes manual steps (reverting features, running migrations, rebuilding permissions). CI/CD pipelines automate this, but the complexity is still there.

Drush is the glue. Without Drush (or its equivalents), Drupal deployments fall apart. This is a command-line tool specific to Drupal that manages the gap between “code in Git” and “working site.” It is powerful but represents institutional knowledge that new team members must learn.

What a Git-native workflow looks like

When your entire site is in a GitHub repository:

my-company-site/
  src/
    content/
      blog/
        redesigning-our-approach.md    # Content — committed, versioned, diffable
        quarterly-update-q4.md
      pages/
        about.md
        services.md
    pages/
      index.astro
      contact.astro
    components/
      Header.astro
      Footer.astro
      BlogCard.astro
    layouts/
      Base.astro
      BlogPost.astro
  public/
    images/
      team-photo.jpg                   # Media — committed, versioned
      logo.svg
  astro.config.mjs
  package.json

Content is in Git. Blog posts are markdown files. Editing content is editing a file. The full history of every content change is in git log. Reverting a bad edit is git revert. Diffing what changed last month is git diff.

Configuration is code. Site configuration — navigation structure, collection schemas, build settings — is in committed files. There is no config export/import cycle. What is in the repo is what the site does.

Deployment is one step. Push to main and the site builds and deploys automatically via Cloudflare Pages, Vercel, or Netlify. No database migrations, no cache clears, no Drush commands.

The repo IS the site. Clone it, run npm install && npm run dev, and you have a working local environment in seconds. No database dumps, no file syncs, no PHP configuration.
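Those history claims are easy to verify in a throwaway repository. A minimal sketch using only git itself (the post name and content are placeholders):

```shell
# Demonstrate content history in Git: create a repo, commit a post,
# revise it, then inspect history and diff for that one file.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
mkdir -p src/content/blog
printf -- '---\ntitle: Hello\n---\nFirst draft.\n' > src/content/blog/hello.md
git add -A && git commit -qm "Add hello post"
printf -- '---\ntitle: Hello\n---\nSecond draft.\n' > src/content/blog/hello.md
git add -A && git commit -qm "Revise hello post"
git log --oneline -- src/content/blog/hello.md   # full history of this one post
git diff HEAD~1 -- src/content/blog/hello.md     # the last edit, as a diff
```

Running git revert on the second commit would undo the revision as a new commit, keeping the full history intact.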

How Git replaces Drupal’s systems

Drupal teams rely on several systems that a Git-based workflow replaces entirely:

Config management becomes committed code

In Drupal, config management (drush config:export/import) syncs YAML files that represent database-stored configuration. It is a system for moving config between environments.

In a Git-based site, configuration is just code. There is nothing to export or import. Navigation structure is a data file or component. Content schemas are TypeScript definitions. Build configuration is a config file. All committed, all versioned.
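To make "content schemas are TypeScript definitions" concrete, here is a sketch of an Astro content collection config (conventionally `src/content/config.ts`). The blog fields shown are illustrative, not derived from any particular Drupal content type:

```typescript
// src/content/config.ts — the committed, versioned schema for blog content.
// Invalid frontmatter fails the build instead of silently entering a database.
import { defineCollection, z } from "astro:content";

const blog = defineCollection({
  type: "content", // markdown files with frontmatter
  schema: z.object({
    title: z.string(),
    date: z.date(),
    tags: z.array(z.string()).default([]),
    draft: z.boolean().default(false),
  }),
});

export const collections = { blog };
```

Changing this schema is a pull request like any other code change; there is no export/import cycle to keep in sync.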

Database backups become Git history

Drupal sites need regular database backups because the database holds content, config, and state. Losing the database means losing the site.

In a Git-based site, git log is your backup history. The entire site — including all content — is recoverable from any commit. GitHub itself provides redundancy. You can also clone the repo to any machine for a full backup.

Drush becomes unnecessary

Drush commands in a typical Drupal workflow:

  • drush config:export / drush config:import — not needed (config is code)
  • drush updatedb — not needed (no database)
  • drush cache:rebuild — not needed (no runtime cache)
  • drush sql-dump / drush sql-cli — not needed (no database)
  • drush pm:enable / drush pm:uninstall — replaced by npm install / npm uninstall

This is not a criticism of Drush — it is an excellent tool for managing Drupal. But it exists to bridge the gap between Git and Drupal’s database-dependent architecture. When the database is removed from the equation, the bridge is no longer necessary.

Environments become branches

In Drupal, managing multiple environments (local, dev, staging, production) requires database syncing, config management, and environment-specific settings. The .env file, settings.local.php, and Drush aliases all exist to manage this complexity.

In a Git-based site, environments are branches with deployment targets:

  • main deploys to production
  • staging deploys to a staging URL
  • Pull requests get deploy previews automatically

No database to sync. No config to import. The branch IS the environment.

How to migrate: every approach available

1. AI coding agents (Claude Code, Cursor, Windsurf, Cline)

The most powerful approach for teams comfortable with developer tooling. AI agents can consume Drupal’s APIs and generate an entire Git-based project.

For Drupal 8+ sites with JSON:API:

Drupal 8+ ships with JSON:API as a core module — one of the best content APIs in the CMS world. An AI agent can consume it directly:

  1. Confirm JSON:API is accessible: https://your-site.com/jsonapi
  2. Open Claude Code, Cursor, Windsurf, or Cline
  3. Prompt: “Fetch all content from my Drupal site at example.com using JSON:API. For each content type, create a markdown file with frontmatter matching the fields. Download all media. Scaffold a static site project. Initialize a Git repo and make a clean initial commit.”
  4. The agent iterates through API endpoints, builds files, downloads media
  5. Push the result to GitHub
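The per-node conversion in step 3 can also be written by hand. A minimal sketch, assuming Drupal's default article fields (title, created, path.alias, body.value); a real exporter would paginate via JSON:API's links.next and convert the HTML body to markdown with a library such as turndown:

```typescript
// Sketch: convert one JSON:API node resource into a markdown file body.
// Field names below are Drupal's defaults for article nodes; adjust for
// custom fields on your content types.
interface JsonApiNode {
  attributes: {
    title: string;
    created: string;          // ISO 8601 timestamp
    path?: { alias?: string };
    body: { value: string };  // HTML from Drupal's body field
  };
}

function nodeToMarkdown(node: JsonApiNode): string {
  const a = node.attributes;
  const slug = a.path?.alias?.replace(/^\//, "") ?? "";
  return [
    "---",
    `title: "${a.title.replace(/"/g, '\\"')}"`,
    `date: ${a.created.slice(0, 10)}`,
    `slug: "${slug}"`,
    "---",
    "",
    a.body.value, // left as HTML here; run through an HTML-to-markdown step
  ].join("\n");
}
```

Fetching page by page from https://your-site.com/jsonapi/node/article and writing each result to src/content/blog/ reproduces what the agent prompt describes.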

For Drupal 7 sites (database export):

D7 does not have JSON:API. Export via SQL queries against the field_data_* tables:

SELECT n.nid, n.title, n.created, n.changed,
       b.body_value, b.body_summary,
       ua.alias as url_alias
FROM node n
JOIN field_data_body b ON b.entity_id = n.nid AND b.entity_type = 'node'
LEFT JOIN url_alias ua ON ua.source = CONCAT('node/', n.nid)
WHERE n.type = 'article' AND n.status = 1;

Export to CSV or JSON, then feed to the AI agent for conversion.

By crawling (any version):

If API and database access are unavailable, the agent can crawl the published site:

“Crawl my website at example.com. Extract all page content, navigation structure, images, and links. Rebuild as a static site in a new Git repository.”

2. AI app builders (Bolt.new, v0.dev, Lovable, Replit Agent)

For teams without terminal access or development environments:

  1. Export Drupal content to JSON/CSV (via JSON:API, Views Data Export, or a database dump)
  2. Upload to Bolt.new, v0.dev, Lovable, or Replit Agent
  3. Describe the desired site with reference to the exported content
  4. The builder generates a project you can push to GitHub
  5. Connect the GitHub repo to a deployment platform

This works well for simpler sites. For complex Drupal sites with many content types, a coding agent provides more control.

3. Hire a Drupal migration specialist

The Drupal ecosystem has established migration expertise:

  • Freelance developers: $2,000-$10,000 for typical migrations. They understand Drupal’s entity system, field storage, and the nuances of content extraction from D7 vs D8+.
  • Drupal agencies: $10,000-$50,000 for enterprise sites with complex content models, multilingual content, Paragraphs-heavy pages, or custom modules.
  • Where to find them: Drupal.org marketplace, Drupal Slack, Toptal, Upwork (search “Drupal migration”).

A specialist is worth considering if your Drupal site has complex entity reference chains, the Paragraphs module for nested content, multilingual content with translation management, or custom modules with business logic that needs to be replicated.

4. BrowserCat Migrate (automated)

BrowserCat Migrate crawls your Drupal site, extracts content and structure, and delivers a GitHub repo with a working static site project and live preview.

5. Drupal-native export then manual build

Drupal’s own tools are excellent for content extraction:

  • JSON:API (D8+ core): RESTful API for all entity types with relationships, filtering, and pagination
  • Views Data Export (contrib): Export any Drupal View as CSV, JSON, or XML
  • Migrate module (D8+ core): Drupal’s migration framework can export to custom destinations
  • Default Content module (contrib): Export entities as YAML files
  • Drush sql-dump: Full database export for D7 migrations
  • Direct SQL: Query field_data_* tables (D7) or entity tables (D8+) directly

Once content is exported, initialize a Git repo, scaffold your static site project, convert content to markdown, and push to GitHub.

6. Manual DIY

  1. Create a new GitHub repo
  2. Audit your Drupal content types, fields, taxonomies, Views, and blocks
  3. Choose an export method and extract all content
  4. Scaffold a static site project (Astro, Hugo, 11ty)
  5. Convert exported content to markdown with frontmatter
  6. Download files from sites/default/files/
  7. Build layouts and components
  8. Set up deployment (connect GitHub to Cloudflare Pages, Vercel, or Netlify)
  9. Configure redirects for any URL changes
  10. Commit and push
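Step 9 deserves particular care for Drupal sites, which often expose both /node/NID paths and their aliases. A sketch of a public/_redirects file (Netlify and Cloudflare Pages both read this format; the paths below are hypothetical examples):

```
# public/_redirects — map old Drupal URLs to the new static site
/node/42      /blog/redesigning-our-approach   301
/node/57      /blog/quarterly-update-q4        301
/about-us     /about                           301
```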

Setting up the Git-based editing workflow

Once the migration is complete, your team needs a way to edit content. There are several approaches, from developer-focused to editor-friendly:

Direct file editing

For technical teams, edit markdown files directly in VS Code, commit, and push. This is the simplest workflow but requires Git proficiency.

GitHub web editor

For quick edits, GitHub’s web interface lets you edit markdown files, preview changes, and commit directly — no local development environment needed. Navigate to the file, click the pencil icon, edit, and commit.

Headless CMS layer

For non-technical editors who need a familiar CMS interface:

  • Decap CMS (formerly Netlify CMS): Free, open source, Git-backed. Provides a browser-based editor that commits to your repo. Content stays in Git.
  • Tina CMS: Visual editing with Git backend. Real-time preview of content changes.
  • Sanity, Contentful, or Storyblok: Cloud-based headless CMSes. Content lives in their service (not Git), but integrates with your static site at build time.

For Drupal teams used to a browser-based editing experience, Decap CMS or Tina CMS provide the most familiar transition while keeping content in Git.
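For reference, a minimal Decap CMS config.yml (served from your site's admin/ folder) matching the repo layout shown earlier; the repo name is a placeholder:

```yaml
backend:
  name: github
  repo: my-org/my-company-site   # placeholder — your GitHub org/repo
  branch: main

media_folder: public/images      # where uploads are committed
public_folder: /images           # how they are referenced in content

collections:
  - name: blog
    label: Blog
    folder: src/content/blog
    create: true
    fields:
      - { name: title, label: Title, widget: string }
      - { name: date, label: Date, widget: datetime }
      - { name: body, label: Body, widget: markdown }
```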

The Drupal deployment workflow vs the Git workflow

Side-by-side comparison for a common task: editing a blog post.

Drupal workflow:

  1. Log in to Drupal admin at /admin
  2. Navigate to Content, find the node, click Edit
  3. Make changes in the WYSIWYG editor
  4. Click Save
  5. Content is immediately live (or goes through a moderation workflow)
  6. Change is in the database — no commit, no history beyond Drupal’s revision system

Git workflow (direct editing):

  1. Open src/content/blog/my-post.md in your editor
  2. Edit the markdown
  3. git add . && git commit -m "Update blog post title" && git push
  4. Site rebuilds and deploys in 30-60 seconds
  5. Full history in Git — who changed what, when, and why

Git workflow (with Decap CMS):

  1. Navigate to your site’s /admin (Decap CMS interface)
  2. Find the blog post, click Edit
  3. Make changes in the editor
  4. Click Publish
  5. Decap commits to GitHub automatically
  6. Site rebuilds and deploys
  7. Full history in Git

The trade-off is clear: Drupal’s editing experience is more polished out of the box (inline editing, media library, drag-and-drop). The Git-based approach is simpler in infrastructure but requires either Git proficiency or an additional CMS layer for non-technical editors.

The Drupal 7 situation

Drupal 7 reached end of life in January 2025. For D7 teams, the migration to a Git-based workflow is especially compelling because the alternative — upgrading to Drupal 10 — is a major undertaking:

  • Custom modules need rewriting for D10’s architecture (D7 hooks to D10 plugins/services)
  • Themes need conversion from PHPTemplate to Twig
  • Database schemas change significantly between D7 and D10
  • Contributed modules may not have D10 equivalents

Organizations typically report 3-6 months and $50,000-$200,000 for a D7 to D10 upgrade. Moving to a Git-based static site can be done in days to weeks for a fraction of that cost.

However, the D10 upgrade is the right choice if your site genuinely needs Drupal’s capabilities — complex permissions, editorial workflows, multilingual management, entity relationships driving application logic. Drupal 10 is excellent software, and the upgrade path exists for a reason.

When a Git workflow is not enough

Be honest about what you lose:

  • No real-time content editing for non-technical users without adding a headless CMS layer
  • No granular permissions. In Drupal, you can set “editors can edit articles but not pages.” In Git, access control is at the repository level.
  • No editorial workflows. Drupal’s content moderation (draft, review, published) does not have a direct equivalent. You can approximate it with Git branches and pull request reviews, but it is less polished.
  • No built-in media management. Drupal’s media library is a full-featured asset manager. In a Git repo, images are files in a directory.
  • Build times. Drupal serves content instantly (from cache). A Git-based site needs to rebuild after changes — typically 30-60 seconds, but longer for very large sites.

For many content sites, these limitations are acceptable. For sites that need robust editorial workflows, the Drupal-to-Git migration might not be the right move.

The cost and maintenance comparison

| Aspect | Drupal | Git-based static site |
| --- | --- | --- |
| Hosting | $100-$500/mo (managed) | $0 (Cloudflare Pages, Vercel, Netlify) |
| Annual maintenance | $3,600-$24,000 | Near-zero |
| Database backups | Required (daily) | Not needed (Git IS the backup) |
| Security patches | Constant (core + contrib) | No server-side surface |
| Developer rates | $50-$200/hr (Drupal specialists) | Any developer + AI agents |
| Deployment complexity | Multi-step (Drush orchestration) | git push |
| Content history | Drupal revisions (database) | git log (permanent) |
| Environment sync | Database dumps + config import | Branch checkout |
| Disaster recovery | Restore database + files from backup | git clone |

After the migration

Your Drupal site, as a GitHub repo, is:

  • Complete. The repo contains everything. Clone it and you have the entire site.
  • Portable. Deploy to any static host — Cloudflare, Vercel, Netlify, AWS S3, or your own server.
  • Maintainable by anyone. No Drupal expertise required. Any developer who knows HTML, CSS, and JavaScript can contribute. AI coding agents can read and modify every file.
  • Backed up by default. Every developer who clones the repo has a full backup. GitHub itself provides redundancy. No more database backup anxiety.
  • Diffable. Every change — code and content — has a diff. Code review applies to content changes. git blame shows who changed what and when.

The transition from Drupal’s database-centric model to Git’s file-centric model is a fundamental architectural shift. For content sites, it is almost always a simplification. For application-like sites that need Drupal’s dynamic capabilities, it is the wrong trade.
