Features

Your data. Your database. Full SQL access.

Every Deepline workspace includes a dedicated Neon PostgreSQL database with full SQL access, zero export fees, and no vendor lock-in.

$0: export fees
5+1: managed schemas plus custom
psql: direct SQL access
Neon: dedicated project per workspace

Primitives

What you actually get

Database type: PostgreSQL (Neon serverless)
Data ownership: Customer-owned, customer-accessible
Query access: Standard SQL via any Postgres client (psql, DBeaver, Metabase)
Data persisted: Every enrichment call, every provider response
Export formats: CSV, SQL queries, pg_dump (full database export)
Export fees: $0
Isolation: Dedicated Neon project per tenant
Schemas: Managed ingestion schemas plus custom tables (dl_ingest, dl_meta, dl_catalog, provider schemas like hubspot/salesforce/attio, tenant_custom)
Retention: Unlimited (your database, no auto-deletion)

Meaning

What this means for you

If you're searching for "enrichment tool with database"

Every enrichment call persists automatically. Execute writes land in enrichments.enrichments and synced providers materialize current-state tables like hubspot.companies. You get queryable history plus clean reporting tables without writing persistence code. Connect Metabase or Grafana for dashboards. Run pg_dump to take everything with you.
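As a sketch, checking the most recent enrichment events could look like this (the created_at, status, and doc column names are assumptions; inspect the actual columns in your workspace with \d enrichments.enrichments in psql):

```sql
-- Illustrative only: column names are assumptions, not the documented schema.
SELECT created_at,
       status,
       doc->>'domain' AS domain
FROM enrichments.enrichments
ORDER BY created_at DESC
LIMIT 20;
```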

If you're searching for "data enrichment audit trail"

Full provenance for every record. The enrichments.enrichments table stores enrichment writes with timestamps, payloads, and status metadata, while provider raw records and materialized tables stay queryable in the same database. This matters for compliance, for debugging bad data, and for comparing provider accuracy over time.
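Under the same caveat, a provenance check for a single record might be sketched like this (the source_provider and domain keys inside the stored payload, and the created_at column, are illustrative assumptions):

```sql
-- Illustrative provenance query: payload keys and column names are assumptions.
SELECT doc->>'source_provider' AS provider,
       MIN(created_at) AS first_seen,
       MAX(created_at) AS last_seen,
       COUNT(*) AS events
FROM enrichments.enrichments
WHERE doc->>'domain' = 'stripe.com'
GROUP BY 1
ORDER BY events DESC;
</imports>
```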

If you're searching for "GTM data warehouse"

Join enrichment data with CRM exports, analytics tables, or custom datasets. The tenant_custom schema gives you full DDL privileges: create your own tables, indexes, and views alongside your enrichment data. Standard PostgreSQL means any tool that speaks SQL works.
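As an illustration, creating a segment table of your own in tenant_custom might look like the following (table and column names are hypothetical, chosen to match the join example shown further down this page):

```sql
-- Hypothetical custom table in the tenant-owned schema.
CREATE TABLE tenant_custom.account_segments (
    company_domain text PRIMARY KEY,
    segment        text NOT NULL,
    score          numeric
);

-- Index to speed up segment filtering and score ordering.
CREATE INDEX ON tenant_custom.account_segments (segment, score DESC);
```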

Honest: not ideal for

If you only need one-off lookups piped straight into a CRM or spreadsheet, you may never touch the database directly. The CLI and API return results inline. The database becomes valuable when you build on top of enrichment data over time.

Schema

A simple mental model

Execute writes land in enrichments.enrichments, provider syncs materialize clean tables in schemas like hubspot and salesforce, and your custom joins live in tenant_custom.

enrichments: Enrichment event log

The canonical enrichment event stream. Execute writes land in enrichments.enrichments.

Tables: enrichments
dl_ingest: Sync control

Platform-managed sync and ingestion control tables.

Tables: raw_record, subscription, sync_job, sync_attempt, sync_cursor, connector_account
hubspot / salesforce / attio: Materialized provider tables

Queryable current-state tables materialized from sync runs. This is the primary reporting surface.

Tables: companies, contacts, deals, tickets, ...
dl_meta: Settings

Platform-managed metadata for the ingestion plane.

Tables: settings
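To see the schemas in your own workspace, a standard Postgres catalog query works:

```sql
-- List all user-visible schemas in the connected database.
SELECT schema_name
FROM information_schema.schemata
WHERE schema_name NOT IN ('pg_catalog', 'information_schema')
ORDER BY schema_name;
```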

Examples

What SQL access lets you do

Find all synced HubSpot companies

SELECT id, name, domain, industry, updated_at
FROM hubspot.companies
WHERE domain = 'stripe.com'
ORDER BY updated_at DESC;

Email verification rate by provider

SELECT doc->>'source_provider' AS provider,
       COUNT(*) AS total,
       COUNT(*) FILTER (WHERE doc->>'email_status' = 'valid') AS verified,
       ROUND(100.0 * COUNT(*) FILTER (WHERE doc->>'email_status' = 'valid') / COUNT(*), 1) AS pct
FROM enrichments.enrichments
GROUP BY 1 ORDER BY pct DESC;

Export synced contacts to CSV

\copy (SELECT firstname, lastname, email, company FROM hubspot.contacts) TO 'contacts.csv' WITH CSV HEADER

Join enrichment data with your own segments

SELECT c.firstname AS name,
       c.email AS email,
       s.segment, s.score
FROM hubspot.contacts c
JOIN tenant_custom.account_segments s
  ON s.company_domain = c.company_domain
WHERE s.segment = 'enterprise'
ORDER BY s.score DESC;

Comparison

Data ownership comparison

Where does your enrichment data actually live?

Feature | Deepline | Clay | Apollo | ZoomInfo
Included database | Neon PostgreSQL (dedicated project) | None | None | None
Direct SQL access | Any Postgres client (psql, DBeaver, Metabase) | No, UI-only | No, API with rate limits | No, UI + limited API
Export cost | $0 (pg_dump, COPY TO) | CSV download only | Export credits (limited per tier) | Export credits (limited per contract)
Data isolation | Dedicated Neon project per tenant | Shared platform | Shared platform | Shared platform
Custom tables | Full DDL in tenant_custom schema | No | No | No
BI tool connection | Direct Postgres connection | No direct connection | No direct connection | No direct connection

FAQ

Common questions

Is the database really free?

Yes. Every Deepline workspace includes a dedicated Neon PostgreSQL project at no additional cost. There are no storage fees, no export fees, and no per-query charges from Deepline. Neon's serverless architecture scales storage automatically.

Can I connect Metabase, Looker, or Grafana directly?

Yes. Request a read connection URI via the API and configure your BI tool with the host, port, database, username, and password. SSL is required. Query provider materialized tables like hubspot.companies and hubspot.contacts, plus enrichments.enrichments for event-level history.

Is my data shared with other tenants?

No. Each workspace gets a dedicated Neon project with separate compute, separate storage, and separate connection endpoints. Isolation is physical rather than enforced through row-level security or shared-schema multi-tenancy.

Can I create my own tables?

Yes, in the tenant_custom schema. The override role has full DDL and DML privileges there. Create tables, indexes, functions, and views alongside your enrichment data. The managed schemas are platform-managed and should not be modified directly.

What happens if I leave Deepline?

Run pg_dump against your Neon project and take everything with you. There are no export restrictions, no lock-in period, and no fees. Your data is standard PostgreSQL; it works anywhere Postgres runs.

Install the CLI and keep your data in Postgres

Run your first enrichment and inspect the resolved records directly with SQL.