
Commit 138f218

Update 01-ddl-create_task.md (#2734)
1 parent ae4ada4 · commit 138f218

1 file changed: +18 −16 lines


docs/en/sql-reference/10-sql-commands/00-ddl/04-task/01-ddl-create_task.md

Lines changed: 18 additions & 16 deletions
@@ -113,7 +113,7 @@ CREATE TASK my_daily_task
WAREHOUSE = 'compute_wh'
SCHEDULE = USING CRON '0 0 9 * * *' 'America/Los_Angeles'
COMMENT = 'Daily summary task'
-AS
+AS
INSERT INTO summary_table SELECT * FROM source_table;
```

@@ -127,7 +127,7 @@ CREATE TASK IF NOT EXISTS mytask
SCHEDULE = 2 MINUTE
SUSPEND_TASK_AFTER_NUM_FAILURES = 3
AS
-INSERT INTO compaction_test.test VALUES((1));
+INSERT INTO compaction_test.test VALUES((1));
```

This example creates a task named `mytask` if it doesn't already exist. The task is assigned to the **system** warehouse and is scheduled to run **every 2 minutes**. It will be **automatically suspended** if it **fails three times consecutively**. The task performs an INSERT operation into the compaction_test.test table.
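
Once that failure limit is hit, the task stays suspended until someone intervenes. A minimal follow-up sketch, assuming the `SHOW TASKS` and `ALTER TASK ... RESUME` commands documented elsewhere in this reference behave as expected (the `LIKE` filter is an assumption):

```sql
-- Check whether the task has been suspended (LIKE filter assumed supported)
SHOW TASKS LIKE 'mytask';

-- Resume the task once the underlying failure has been fixed
ALTER TASK mytask RESUME;
```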
@@ -139,9 +139,9 @@ CREATE TASK IF NOT EXISTS daily_sales_summary
WAREHOUSE = 'analytics'
SCHEDULE = 30 SECOND
AS
-SELECT sales_date, SUM(amount) AS daily_total
-FROM sales_data
-GROUP BY sales_date;
+SELECT sales_date, SUM(amount) AS daily_total
+FROM sales_data
+GROUP BY sales_date;
```

In this example, a task named `daily_sales_summary` is created with **second-level scheduling**. It is scheduled to run **every 30 seconds**. The task uses the **analytics** warehouse and calculates the daily sales summary by aggregating data from the sales_data table.
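
For a quick test outside the 30-second schedule, the task can also be run once on demand — a hedged sketch, assuming the `EXECUTE TASK` command applies here:

```sql
-- Trigger a single ad-hoc run, independent of the schedule
EXECUTE TASK daily_sales_summary;
```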
@@ -152,22 +152,24 @@ In this example, a task named `daily_sales_summary` is created with **second-lev
CREATE TASK IF NOT EXISTS process_orders
WAREHOUSE = 'etl'
AFTER task1, task2
-ASINSERT INTO data_warehouse.orders
-SELECT * FROM staging.orders;
+AS
+INSERT INTO data_warehouse.orders SELECT * FROM staging.orders;
```

In this example, a task named `process_orders` is created, and it is defined to run **after the successful completion** of **task1** and **task2**. This is useful for creating **dependencies** in a **Directed Acyclic Graph (DAG)** of tasks. The task uses the **etl** warehouse and transfers data from the staging area to the data warehouse.

+> Tip: Using the AFTER parameter does not require setting the SCHEDULE parameter.
+
### Conditional Execution

```sql
CREATE TASK IF NOT EXISTS hourly_data_cleanup
WAREHOUSE = 'maintenance'
-SCHEDULE = '0 0 * * * *'
+SCHEDULE = USING CRON '0 0 9 * * *' 'America/Los_Angeles'
WHEN STREAM_STATUS('db1.change_stream') = TRUE
AS
-DELETE FROM archived_data
-WHERE archived_date < DATEADD(HOUR, -24, CURRENT_TIMESTAMP());
+DELETE FROM archived_data
+WHERE archived_date < DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

```

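The `process_orders` example above assumes `task1` and `task2` already exist as upstream nodes of the DAG. A hedged sketch of that upstream setup, where only the root task carries a `SCHEDULE` and the downstream task declares `AFTER` (warehouse and table names are illustrative):

```sql
-- Root task: the only node in the DAG with a SCHEDULE
CREATE TASK IF NOT EXISTS task1
WAREHOUSE = 'etl'
SCHEDULE = USING CRON '0 0 * * * *' 'UTC'
AS
INSERT INTO staging.orders SELECT * FROM raw_orders;

-- Downstream task: runs after task1 succeeds, with no SCHEDULE of its own
CREATE TASK IF NOT EXISTS task2
WAREHOUSE = 'etl'
AFTER task1
AS
DELETE FROM staging.orders WHERE order_id IS NULL;
```
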
@@ -181,12 +183,12 @@ CREATE TASK IF NOT EXISTS mytask
SCHEDULE = 30 SECOND
ERROR_INTEGRATION = 'myerror'
AS
-BEGIN
+BEGIN
BEGIN;
INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
DELETE FROM mytable WHERE ts < DATEADD(MINUTE, -5, CURRENT_TIMESTAMP());
COMMIT;
-END;
+END;
```

In this example, a task named `mytask` is created. It uses the **mywh** warehouse and is scheduled to run **every 30 seconds**. The task executes a **BEGIN block** containing an INSERT statement and a DELETE statement, committing the transaction after both statements complete. If the task fails, it triggers the **error integration** named **myerror**.
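
The `ERROR_INTEGRATION` name must refer to an existing notification integration. A hedged sketch of creating one, assuming a webhook-based `CREATE NOTIFICATION INTEGRATION` syntax is available and using a placeholder URL:

```sql
-- Webhook integration that failed task runs will notify (placeholder URL)
CREATE NOTIFICATION INTEGRATION IF NOT EXISTS myerror
TYPE = WEBHOOK
ENABLED = TRUE
WEBHOOK = (url = 'https://example.com/task-errors', method = 'POST');
```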
@@ -201,10 +203,10 @@ CREATE TASK IF NOT EXISTS cache_enabled_task
enable_query_result_cache = 1,
query_result_cache_min_execute_secs = 5
AS
-SELECT SUM(amount) AS total_sales
-FROM sales_data
-WHERE transaction_date >= DATEADD(DAY, -7, CURRENT_DATE())
-GROUP BY product_category;
+SELECT SUM(amount) AS total_sales
+FROM sales_data
+WHERE transaction_date >= DATEADD(DAY, -7, CURRENT_DATE())
+GROUP BY product_category;
```

In this example, a task named `cache_enabled_task` is created with **session parameters** that enable query result caching. The task is scheduled to run **every 5 minutes** and uses the **analytics** warehouse. The session parameters **`enable_query_result_cache = 1`** and **`query_result_cache_min_execute_secs = 5`** are specified **after all other task parameters** and enable the query result cache for queries that take at least 5 seconds to execute. This can **improve performance** for subsequent executions of the same task if the underlying data hasn't changed.
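
To see the session-level defaults these task parameters override, the relevant settings can be inspected — a minimal sketch, assuming `SHOW SETTINGS` accepts a `LIKE` pattern:

```sql
-- Inspect current values of the query-result-cache settings
SHOW SETTINGS LIKE '%query_result_cache%';
```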
