Commit e5f12e6

Merge pull request #56 from stackql/feature/ja-updates
Feature/ja updates
2 parents 081d61e + ae3521b commit e5f12e6

File tree

149 files changed: +8505 -1577 lines


.github/workflows/prod-web-deploy.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -18,7 +18,7 @@ jobs:
 
       - uses: actions/setup-node@v4
         with:
-          node-version: 18
+          node-version: 20
           cache: yarn
           cache-dependency-path: website/yarn.lock
 
```
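Both deploy workflows get the same Node 18 → 20 bump (the second file follows). To reproduce the workflow's toolchain locally, a minimal sketch, assuming nvm is available and that the site's yarn scripts include a build target (neither is shown in this diff):

```bash
# Use the same Node major the workflows now pin.
nvm install 20
nvm use 20

# Install against the lockfile the workflow caches on (website/yarn.lock).
cd website
yarn install --frozen-lockfile
yarn build   # assumed build script; not part of this diff
```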

.github/workflows/preview-web-deploy.yml renamed to .github/workflows/test-web-deploy.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -18,7 +18,7 @@ jobs:
 
       - uses: actions/setup-node@v4
         with:
-          node-version: 18
+          node-version: 20
           cache: yarn
           cache-dependency-path: website/yarn.lock
 
```

.gitignore

Lines changed: 4 additions & 0 deletions

```diff
@@ -12,6 +12,9 @@ testcreds/
 *.log
 venv/
 .venv/
+nohup.out
+
+/.ruff_cache
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
@@ -87,3 +90,4 @@ docs/_build/
 
 venv/
 .DS_Store
+myenv/
```
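To confirm the new ignore rules behave as intended, `git check-ignore -v` reports which pattern matched; the paths below are illustrative:

```bash
# -v prints the .gitignore source line that matched each path (paths are illustrative).
git check-ignore -v nohup.out .ruff_cache/CACHEDIR.TAG myenv/bin/python
```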

CHANGELOG.md

Lines changed: 6 additions & 1 deletion

```diff
@@ -1,9 +1,14 @@
 # Changelog
 
+## 1.8.5 (2025-06-30)
+
+- Added support for resource scoped variables
+- Added developer credits in `info`
+
 ## 1.8.3 (2025-02-08)
 
 - Added walkthrough for databricks bootstrap on aws.
-- Bugfix for expport variables on dry run.
+- Bugfix for export variables on dry run.
 
 ## 1.8.2 (2025-01-16)
 
```

MANIFEST.in

Lines changed: 1 addition & 0 deletions

```diff
@@ -1,3 +1,4 @@
 # MANIFEST.in
 include README.rst
 recursive-include stackql_deploy/templates *.template
+include stackql_deploy/inc/contributors.csv
```
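MANIFEST.in only affects source distributions, so a quick way to verify the CSV is actually packaged is to build an sdist and inspect it; a sketch, assuming the `build` package is installed:

```bash
# Build just the sdist, then confirm the contributors file made it in.
python3 -m build --sdist
tar -tzf dist/*.tar.gz | grep contributors.csv
```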

README.md

Lines changed: 4 additions & 2 deletions

````diff
@@ -265,14 +265,16 @@ To distribute **stackql-deploy** on PyPI, you'll need to ensure that you have al
 
 First, ensure you have the latest versions of `setuptools` and `wheel` installed:
 
-```
+```bash
+python3 -m venv venv
+source venv/bin/activate
 # pip install --upgrade setuptools wheel
 pip install --upgrade build
 ```
 
 Then, navigate to your project root directory and build the distribution files:
 
-```
+```bash
 rm dist/stackql_deploy*
 python3 -m build
 # or
````
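For completeness, the updated flow end to end; the `twine check` step is an addition of mine for sanity and is not part of the README excerpt (which is truncated at `# or` above):

```bash
# Isolated build environment, per the updated README.
python3 -m venv venv
source venv/bin/activate
pip install --upgrade build

# Clean old artifacts, then build a fresh sdist and wheel.
rm -f dist/stackql_deploy*
python3 -m build

# Assumed extra step: validate archive metadata before any upload.
pip install twine
twine check dist/*
```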

examples/databricks/all-purpose-cluster/README.md renamed to examples/databricks/classic/README.md

Lines changed: 10 additions & 18 deletions

````diff
@@ -26,7 +26,7 @@ Now, is is convenient to use environment variables for context. Note that for o
 ```bash
 #!/usr/bin/env bash
 
-export ASSETS_AWS_REGION='us-east-1' # or wherever you want
+export AWS_REGION='us-east-1' # or wherever you want
 export AWS_ACCOUNT_ID='<your aws account ID>'
 export DATABRICKS_ACCOUNT_ID='<your databricks account ID>'
 export DATABRICKS_AWS_ACCOUNT_ID='<your databricks aws account ID>'
@@ -46,28 +46,20 @@ export AWS_ACCESS_KEY_ID='<your aws access key id per aws cli>'
 Now, let us do some sanity checks and housekeeping with `stackql`. This is purely optional. From the root of this repository:
 
 ```
-
 source examples/databricks/all-purpose-cluster/convenience.sh
-
 stackql shell
-
 ```
 
 This will start a `stackql` interactive shell. Here are some commands you can run (I will not place output here, that will be shared in a corresponding video):
 
 
 ```sql
-
 registry pull databricks_account v24.12.00279;
-
 registry pull databricks_workspace v24.12.00279;
 
 -- This will fail if accounts, subscription, or credentials are in error.
 select account_id FROM databricks_account.provisioning.credentials WHERE account_id = '<your databricks account id>';
-
-
 select account_id, workspace_name, workspace_id, workspace_status from databricks_account.provisioning.workspaces where account_id = '<your databricks account id>';
-
 ```
 
 For extra credit, you can (asynchronously) delete the unnecessary workspace with `delete from databricks_account.provisioning.workspaces where account_id = '<your databricks account id>' and workspace_id = '<workspace id>';`, where you obtain the workspace id from the above query. I have noted that due to some reponse caching it takes a while to disappear from select queries (much longer than disappearance from the web page), and you may want to bounce the `stackql` session to hurry things along. This is not happening on the `stackql` side, but session bouncing forces a token refresh which can help cache busting.
@@ -77,20 +69,20 @@ For extra credit, you can (asynchronously) delete the unnecessary workspace with
 Time to get down to business. From the root of this repository:
 
 ```bash
-
+python3 -m venv myenv
 source examples/databricks/all-purpose-cluster/convenience.sh
-
-source ./.venv/bin/activate
-
-
+source venv/bin/activate
+pip install stackql-deploy
 ```
 
+> alternatively set the `AWS_REGION`, `AWS_ACCOUNT_ID`, `DATABRICKS_ACCOUNT_ID`, `DATABRICKS_AWS_ACCOUNT_ID` along with provider credentials `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `DATABRICKS_CLIENT_ID`, `DATABRICKS_CLIENT_SECRET`
+
 Then, do a dry run (good for catching **some** environmental issues):
 
 ```bash
 stackql-deploy build \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
@@ -105,7 +97,7 @@ Now, let use do it for real:
 ```bash
 stackql-deploy build \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
@@ -128,7 +120,7 @@ We can also use `stackql-deploy` to assess if our infra is shipshape:
 ```bash
 stackql-deploy test \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
@@ -151,7 +143,7 @@ Now, let us teardown our `stackql-deploy` managed infra:
 ```bash
 stackql-deploy teardown \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
````
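Pulling the rename through end to end, a condensed sketch of the updated flow; note the README excerpt creates `myenv` but activates `venv`, so this sketch keeps a single name, and any build flags beyond those visible above are left out rather than guessed:

```bash
# Environment context per the renamed variable (AWS_REGION, formerly ASSETS_AWS_REGION).
source examples/databricks/all-purpose-cluster/convenience.sh

# Fresh virtual environment and CLI install, mirroring the updated README
# (one directory name used throughout, unlike the excerpt above).
python3 -m venv myenv
source myenv/bin/activate
pip install stackql-deploy

# Deploy with the renamed variable wired through.
stackql-deploy build \
  examples/databricks/all-purpose-cluster dev \
  -e AWS_REGION=${AWS_REGION} \
  -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
  -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
  -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID}
```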
