Roadmap
Upcoming
These are features that will be released soon.
June 2023: Cloud Scheduler (Preview)
Schedule jobs to build and test data with unparalleled operational insight
Deep Channel is launching a fully managed scheduler for building and testing production data models. It seamlessly plugs into your workflow and lets you see the health of your data stack like never before.

The preview will be open to a select group of companies and packs dozens of game-changing features, including:
Model-level telemetry means model builds are more than piles of logs. Scheduled builds and tests are viewed at the granularity of the model, so you spend less time sifting through logs to identify errors. Immediately see which models are failing to build or have failing tests. Drill into a model to see how it has changed or performed over time.
The advanced scheduler intelligently detects models that have changed, or whose dependent data has changed, when deciding which models to run. Builds happen faster and with less wasted compute (see the sketch after this list).
Pull request collaboration visualizes the team's changes to code and the underlying data in each pull request, so you can integrate changes faster and more reliably.
Secure connection storage keeps credentials encrypted and decrypts them only at design time, so sensitive connection information is no longer stored on disk.
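To make the change-detection idea above concrete, here is a minimal sketch that selects models whose code or upstream data has changed, assuming hypothetical per-model checksums and source freshness timestamps. This is illustrative only, not Deep Channel's actual selection logic:

```python
from dataclasses import dataclass


@dataclass
class ModelState:
    """Snapshot of a model at the time of the last successful build."""
    checksum: str             # hash of the compiled SQL
    source_updated_at: float  # latest "last modified" time across upstream sources


def models_to_run(previous: dict[str, ModelState],
                  current: dict[str, ModelState]) -> set[str]:
    """Select only the models whose code or dependent data changed."""
    selected = set()
    for name, state in current.items():
        old = previous.get(name)
        if old is None:                                         # brand-new model
            selected.add(name)
        elif state.checksum != old.checksum:                    # model code changed
            selected.add(name)
        elif state.source_updated_at > old.source_updated_at:   # fresher upstream data
            selected.add(name)
    return selected
```

Everything not selected is skipped, which is where the compute savings come from.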
On Deck: Q2 2023
These are features that are under development, have been chosen for development, or have a proof of concept underway.
Next-gen Model Parsing
Today, Deep Channel does an incredible job of parsing models to provide linting and autocomplete suggestions. However, models with complex Jinja are only partially understood.
Using our new rapid compilation technology introduced in 3.12 (April 2023), Deep Channel will instantly compile complex models and parse the resulting SQL, giving a complete picture of the model's structure in milliseconds.
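As a rough illustration of the compile-then-parse idea (not the rapid compilation engine itself), the following uses the off-the-shelf jinja2 and sqlglot libraries as stand-ins: render the Jinja down to plain SQL, then parse that SQL to recover the model's full structure.

```python
from jinja2 import Environment
import sqlglot

MODEL = """
select
    id,
    {% for col in ["first_name", "last_name"] %}
    trim({{ col }}) as {{ col }}{{ "," if not loop.last }}
    {% endfor %}
from {{ source_table }}
"""

# Step 1: compile the Jinja template down to plain SQL.
compiled_sql = Environment().from_string(MODEL).render(source_table="raw.users")

# Step 2: parse the compiled SQL to see the full structure of the model.
ast = sqlglot.parse_one(compiled_sql, read="snowflake")
print([col.alias_or_name for col in ast.selects])  # ['id', 'first_name', 'last_name']
```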
AI Assistance
Data tooling (and the world at large) is quickly becoming bloated with hacky AI-centric products that forgo thoughtful and complete functionality. While AI features can enhance developer workflows, we must remember that these features serve to augment well-designed foundations, not replace them.
Deep Channel is uniquely positioned to offer AI assistance to developers: it is becoming the de facto development environment for data professionals, and our parser offers a structured understanding of data projects to an unparalleled degree. The set of possibilities is unlimited, but three features have emerged as major quality-of-life enhancements that are in the realm of possibility right now:
Autocomplete Suggestions - A "copilot" experience offering adapter-specific suggestions based on the contents of the model.
Example: when chaining a CTE named flattened in a Snowflake model, Deep Channel could suggest lateral flatten syntax to flatten a VARIANT column from the CTE above it.
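A toy sketch of such an adapter-specific suggestion rule is shown below. The upstream CTE name and VARIANT column name are hypothetical, and this is not the actual suggestion engine:

```python
def suggest_flatten_body(adapter: str, upstream_cte: str, variant_column: str) -> str | None:
    """When the developer starts a CTE named 'flattened', suggest a body that
    flattens a VARIANT column exposed by the CTE above it (Snowflake only)."""
    if adapter != "snowflake":
        return None
    return (
        f"select\n"
        f"    f.value as item\n"
        f"from {upstream_cte},\n"
        f"    lateral flatten(input => {upstream_cte}.{variant_column}) f"
    )


# Hypothetical names for the upstream CTE and its VARIANT column:
print(suggest_flatten_body("snowflake", "raw_events", "payload"))
```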
Code generation - Deep Channel already generates source declarations and base models. A more robust version would analyze the contents of the source and offer suggestions for the source declaration (column names, descriptions) and for basic cleaning in the base model based on the contents of the table.
Basic Example: generating a base model from a table whose columns contain whitespace might wrap those columns in trim, wrap nullable columns in to_varchar, automatically alias column names to snake_case, and flatten VARIANT columns.
Advanced Example: In the future, using the contents of the project, Deep Channel could suggest downstream fact and dimension tables from these staging models with common business logic or usage patterns.
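A simplified sketch of the basic example: generate a base model's SQL from column metadata, trimming text columns and aliasing names to snake_case. The source table, column names, and types are hypothetical, and this is not the actual generator:

```python
import re


def to_snake_case(name: str) -> str:
    """'First Name' or 'SignupDate' -> 'first_name' / 'signup_date'."""
    name = re.sub(r"[\s\-]+", "_", name.strip())
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    return name.lower()


def generate_base_model(source_table: str, columns: dict[str, str]) -> str:
    """Emit a base model that applies basic cleaning per column type."""
    lines = []
    for name, dtype in columns.items():
        alias = to_snake_case(name)
        if dtype == "TEXT":
            expr = f'trim("{name}")'   # strip stray whitespace from text values
        else:
            expr = f'"{name}"'         # other types pass through unchanged
        lines.append(f"    {expr} as {alias}")
    return "select\n" + ",\n".join(lines) + f"\nfrom {source_table}"


# Hypothetical source table and column metadata:
print(generate_base_model("raw.customers", {"First Name": "TEXT", "SignupDate": "DATE"}))
```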
Advanced Problem Detection - Today, Deep Channel is the only technology that provides design-time syntactic and semantic analysis of models. But every project has problems lurking beneath the static analysis we perform. Given a large enough corpus of training data (projects and common mistakes), we can detect and report problems that require human judgement today.
Example: a model that joins two CTEs on an incorrect column is syntactically correct, but the resulting data is useless. A problem like this is severe, yet it can only be caught by a human reading the code and recognizing the error. Analyzing the contents of each column can provide a statistical estimate of the likelihood that a mistake is being made.
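A crude sketch of that kind of statistical check: estimate how much the values of the two join columns actually overlap and warn when the overlap is suspiciously low. The threshold and the sampled values are hypothetical:

```python
def join_overlap_ratio(left_values: list, right_values: list) -> float:
    """Fraction of distinct left-side join keys that also appear on the right side."""
    left, right = set(left_values), set(right_values)
    if not left:
        return 0.0
    return len(left & right) / len(left)


def warn_if_suspicious_join(left_values: list, right_values: list,
                            threshold: float = 0.05) -> None:
    """Flag joins whose keys barely overlap; they are likely using the wrong column."""
    ratio = join_overlap_ratio(left_values, right_values)
    if ratio < threshold:
        print(f"warning: join keys overlap on only {ratio:.1%} of values; "
              "this join may be using the wrong column")


# Joining orders.customer_id against customers.order_count (a likely mistake):
warn_if_suspicious_join([101, 102, 103, 104], [1, 3, 7, 2])
```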
Under consideration: Q3/Q4 2023
Documentation
Data documentation remains an unsolved problem and has created an entire ecosystem of third-party tooling that is disconnected from the developer workflow and expensive to maintain.
By integrating the documentation process directly into the data professional's workflow, Deep Channel will become a knowledge hub for the entire company, not just data professionals.
Virtualization
Deep Channel has a detailed picture of the data warehouse and every data model's contents. Deep Channel can easily compute the logical difference between the two and output a diff as a visualization or a command to synchronize the two.
This allows developers to quickly synchronize their database with a project after pulling latest or switching branches.
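A minimal sketch of the logical-diff idea: compare the view definitions a project expects with what currently exists in the warehouse, and emit commands to bring them in sync. The relation names are hypothetical, and a real implementation would diff far more than view definitions:

```python
def diff_relations(project: dict[str, str], warehouse: dict[str, str]) -> list[str]:
    """Both arguments map relation name -> view definition SQL.
    Returns the commands needed to make the warehouse match the project."""
    commands = []
    for name, sql in project.items():
        if name not in warehouse:
            commands.append(f"create view {name} as {sql}")       # missing in warehouse
        elif warehouse[name] != sql:
            commands.append(f"create or replace view {name} as {sql}")  # definition drifted
    for name in warehouse.keys() - project.keys():
        commands.append(f"drop view {name}")                      # no longer in the project
    return commands


project = {"analytics.stg_orders": "select * from raw.orders"}
warehouse = {"analytics.stg_orders": "select id from raw.orders",
             "analytics.stg_payments_old": "select * from raw.payments"}
for command in diff_relations(project, warehouse):
    print(command)
```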
Better YAML Interfaces
YAML is a mainstay in projects, and its presence is only growing. While YAML is efficient for configuration, it quickly becomes unwieldy and is difficult for experts and novices alike.
Deep Channel should visualize YAML files in such a way that developers can intuitively view and edit the values. Deep Channel should update YAML files as referenced values (columns, tests) change in the development workflow.
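A rough sketch of keeping a schema YAML file in sync as a model's columns change, using ruamel.yaml so comments and key order survive the round trip. The file layout mirrors a typical schema.yml, the paths and names are hypothetical, and this is not Deep Channel's implementation:

```python
from ruamel.yaml import YAML

yaml = YAML()  # round-trip mode preserves comments and key order


def sync_columns(schema_path: str, model_name: str, actual_columns: list[str]) -> None:
    """Add missing column entries and drop stale ones for a single model."""
    with open(schema_path) as f:
        doc = yaml.load(f)

    for model in doc.get("models", []):
        if model.get("name") != model_name:
            continue
        existing = model.setdefault("columns", [])
        known = {c["name"] for c in existing}
        # Add entries for columns that now exist in the model.
        for col in actual_columns:
            if col not in known:
                existing.append({"name": col})
        # Remove entries for columns that no longer exist.
        model["columns"] = [c for c in existing if c["name"] in set(actual_columns)]

    with open(schema_path, "w") as f:
        yaml.dump(doc, f)


# Hypothetical usage: the model gained an 'email' column and lost 'legacy_id'.
sync_columns("models/schema.yml", "stg_customers", ["id", "email"])
```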