Action ID: marketplace/azure/microsoft-365-certification-quick-evaluation
Author: Unknown
Publisher: azure
Repository: github.com/azure/microsoft-365-certification-quick-evaluation
The Microsoft 365 certification quick-evaluation action helps you get an immediate compliance result, either by report or by deployments.
| Name | Required | Description |
|---|---|---|
| report-name | Optional | name of the compliance report |
| deployment-ids | Optional | deployment ids, array of string in json format |
name: 'Microsoft 365 certification quick evaluation'
description: 'Microsoft 365 certiquick assessments action helps you get immediately compliance result by report or by deployments.'
inputs:
report-name:
description: 'name of the compliance report'
required: false
deployment-ids:
description: 'deployment ids, array of string in json format'
required: false
runs:
using: 'node16'
main: 'lib/index.js'
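A hedged usage sketch, not taken from the action's own documentation: the step below assumes a published `v1` tag and uses a placeholder report name; `deployment-ids` is shown commented out, since the description suggests you evaluate either by report or by deployments.

```yaml
steps:
  - name: Quick compliance evaluation
    # The version tag below is an assumption; pin whatever release the action actually publishes.
    uses: azure/microsoft-365-certification-quick-evaluation@v1
    with:
      report-name: 'my-compliance-report'   # hypothetical report name
      # deployment-ids: '["<deployment-id-1>", "<deployment-id-2>"]'   # JSON array of strings
```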
Action ID: marketplace/github/stale
Author: GitHub
Publisher: github
Repository: github.com/github/stale
Close issues and pull requests with no recent activity
| Name | Required | Description |
|---|---|---|
| repo-token | Required | Token for the repository. Can be passed in using `{{ secrets.GITHUB_TOKEN }}`. |
| stale-issue-message | Optional | The message to post on the issue when tagging it. If none provided, will not mark issues stale. |
| stale-pr-message | Optional | The message to post on the pr when tagging it. If none provided, will not mark pull requests stale. |
| close-issue-message | Optional | The message to post on the issue when closing it. If none provided, will not comment when closing an issue. |
| close-pr-message | Optional | The message to post on the pr when closing it. If none provided, will not comment when closing a pull request. |
| days-before-stale | Optional | The number of days old an issue can be before marking it stale. Set to -1 to never mark issues or pull requests as stale automatically. Default: 60 |
| days-before-close | Optional | The number of days to wait to close an issue or pull request after it has been marked stale. Set to -1 to never close stale issues. Default: 7 |
| stale-issue-label | Optional | The label to apply when an issue is stale. Default: Stale |
| close-issue-label | Optional | The label to apply when an issue is closed. |
| exempt-issue-labels | Optional | The labels to apply when an issue is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2") |
| stale-pr-label | Optional | The label to apply when a pull request is stale. Default: Stale |
| close-pr-label | Optional | The label to apply when a pull request is closed. |
| exempt-pr-labels | Optional | The labels to apply when a pull request is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2") |
| only-labels | Optional | Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled) and can be a comma-separated list of labels. |
| operations-per-run | Optional | The maximum number of operations per run, used to control rate limiting. Default: 30 |
| remove-stale-when-updated | Optional | Remove stale labels from issues when they are updated or commented on. Default: True |
| debug-only | Optional | Run the processor in debug mode without actually performing any operations on live issues. |
| ascending | Optional | The order to get issues or pull requests. Defaults to false, which is descending |
| skip-stale-pr-message | Optional | Skip adding stale message when marking a pull request as stale. |
| skip-stale-issue-message | Optional | Skip adding stale message when marking an issue as stale. |
name: 'Close Stale Issues'
description: 'Close issues and pull requests with no recent activity'
author: 'GitHub'
inputs:
repo-token:
description: 'Token for the repository. Can be passed in using `{{ secrets.GITHUB_TOKEN }}`.'
required: true
stale-issue-message:
description: 'The message to post on the issue when tagging it. If none provided, will not mark issues stale.'
stale-pr-message:
description: 'The message to post on the pr when tagging it. If none provided, will not mark pull requests stale.'
close-issue-message:
description: 'The message to post on the issue when closing it. If none provided, will not comment when closing an issue.'
close-pr-message:
description: 'The message to post on the pr when closing it. If none provided, will not comment when closing a pull requests.'
days-before-stale:
description: 'The number of days old an issue can be before marking it stale. Set to -1 to never mark issues or pull requests as stale automatically.'
default: 60
days-before-close:
description: 'The number of days to wait to close an issue or pull request after it being marked stale. Set to -1 to never close stale issues.'
default: 7
stale-issue-label:
description: 'The label to apply when an issue is stale.'
default: 'Stale'
close-issue-label:
description: 'The label to apply when an issue is closed.'
exempt-issue-labels:
description: 'The labels to apply when an issue is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2")'
default: ''
stale-pr-label:
description: 'The label to apply when a pull request is stale.'
default: 'Stale'
close-pr-label:
description: 'The label to apply when a pull request is closed.'
exempt-pr-labels:
description: 'The labels to apply when a pull request is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2")'
default: ''
only-labels:
description: 'Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled) and can be a comma-separated list of labels.'
default: ''
operations-per-run:
description: 'The maximum number of operations per run, used to control rate limiting.'
default: 30
remove-stale-when-updated:
description: 'Remove stale labels from issues when they are updated or commented on.'
default: true
debug-only:
description: 'Run the processor in debug mode without actually performing any operations on live issues.'
default: false
ascending:
description: 'The order to get issues or pull requests. Defaults to false, which is descending'
default: false
skip-stale-pr-message:
description: 'Skip adding stale message when marking a pull request as stale.'
default: false
skip-stale-issue-message:
description: 'Skip adding stale message when marking an issue as stale.'
default: false
runs:
using: 'node12'
main: 'dist/index.js'
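A minimal scheduled workflow using this action; the version tag is an assumption and should be replaced with a released tag:

```yaml
name: Close stale issues and PRs
on:
  schedule:
    - cron: '0 0 * * *'
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: github/stale@v1   # tag is an assumption; pin a released version
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'This issue has been inactive for a while and will be closed soon.'
          stale-pr-message: 'This pull request has been inactive for a while and will be closed soon.'
          days-before-stale: 60
          days-before-close: 7
```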
Action ID: marketplace/pgrimaud/action-shopify
Author: Pierre Grimaud
Publisher: pgrimaud
Repository: github.com/pgrimaud/action-shopify
Deploy Shopify theme with Theme Kit
name: 'Deploy Shopify theme'
author: 'Pierre Grimaud'
description: 'Deploy Shopify theme with Theme Kit'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'shopping-bag'
color: 'green'
Action ID: marketplace/JamesIves/f45-lionheart-strava-importer
Author: James Ives <iam@jamesiv.es> (https://jives.dev)
Publisher: JamesIves
Repository: github.com/JamesIves/f45-lionheart-strava-importer
Fetches a workout from the F45 Lionheart API and pushes it to Strava. 🦁
| Name | Required | Description |
|---|---|---|
| STRAVA_CLIENT_ID | Required | Your Strava Client ID |
| STRAVA_CLIENT_SECRET | Required | Your Strava Client Secret |
| STRAVA_REFRESH_TOKEN | Required | Your Strava Refresh Token |
| F45_CLASS_DATE | Required | The date that the F45 Class was taken, this should be in the format YYYY-MM-DD. Default: 2025-03-14 |
| F45_CLASS_TIME | Required | The time of the F45 Class, this should be in 24 hour format and mirror the time the class was scheduled to start. Default: 06:30 |
| F45_STUDIO_CODE | Required | The F45 Studio Code where the class was taken |
| F45_USER_ID | Required | Your F45 User ID |
| F45_LIONHEART_SERIAL_NUMBER | Required | Your F45 Lionheart Serial Number Default: 366418 |
| DRY_RUN | Optional | If set to true the script will not push the workout to Strava. Default: false |
| Name | Description |
|---|---|
| deployment-status | The status of the deployment that indicates if the run failed or passed. Possible outputs include: success \| failed \| skipped |
name: "F45 Lionheart Strava Importer"
description: "Fetches a workout from the F45 Lionheart API and pushes it to Strava. 🦁"
author: "James Ives <iam@jamesiv.es> (https://jives.dev)"
runs:
using: "node20"
main: "dist/index.js"
branding:
icon: "heart"
color: "blue"
inputs:
STRAVA_CLIENT_ID:
description: "Your Strava Client ID"
required: true
STRAVA_CLIENT_SECRET:
description: "Your Strava Client Secret"
required: true
STRAVA_REFRESH_TOKEN:
description: "Your Strava Refresh Token"
required: true
F45_CLASS_DATE:
description: "The date that the F45 Class was taken, this should be in the format YYYY-MM-DD."
required: true
default: "2025-03-14"
F45_CLASS_TIME:
description: "The time of the F45 Class, this should be in 24 hour format and mirror the time the class was scheduled to start."
required: true
default: "06:30"
F45_STUDIO_CODE:
description: "The F45 Studio Code where the class was taken"
required: true
default: ""
F45_USER_ID:
description: "Your F45 User ID"
required: true
default: ""
F45_LIONHEART_SERIAL_NUMBER:
description: "Your F45 Lionheart Serial Number"
required: true
default: "366418"
DRY_RUN:
description: "If set to true the script will not push the workout to Strava."
required: false
default: "false"
outputs:
deployment-status:
description: "The status of the deployment that indicates if the run failed or passed. Possible outputs include: success|failed|skipped"
Action ID: marketplace/pypa/gh-action-pip-audit
Author: William Woodruff <william@trailofbits.com>
Publisher: pypa
Repository: github.com/pypa/gh-action-pip-audit
Use pip-audit to scan Python dependencies for known vulnerabilities
| Name | Required | Description |
|---|---|---|
| summary | Optional | render a Markdown summary of the audit (default true) Default: True |
| no-deps | Optional | don't do any dependency resolution (requires fully pinned requirements) (default false) |
| require-hashes | Optional | enforce hashes (requirements-style inputs only) (default false) |
| vulnerability-service | Optional | the vulnerability service to use (PyPI or OSV, defaults to PyPI) Default: PyPI |
| inputs | Optional | the inputs to audit, whitespace separated (defaults to current path) |
| virtual-environment | Optional | the virtual environment to audit within (default none) |
| local | Optional | for environmental audits, consider only packages marked local (default false) |
| index-url | Optional | the base URL for the PEP 503-compatible package index to use |
| extra-index-urls | Optional | extra PEP 503-compatible indexes to use, whitespace separated |
| ignore-vulns | Optional | vulnerabilities to explicitly exclude, if present (whitespace separated) |
| disable-pip | Optional | disable pip |
| locked | Optional | audit lock files from the local Python project |
| internal-be-careful-allow-failure | Optional | don't fail the job if the audit fails (default false) |
| internal-be-careful-extra-flags | Optional | extra flags to be passed in to pip-audit |
| Name | Description |
|---|---|
| internal-be-careful-output | the column-formatted output from pip-audit, wrapped as base64 |
name: "gh-action-pip-audit"
author: "William Woodruff <william@trailofbits.com>"
description: "Use pip-audit to scan Python dependencies for known vulnerabilities"
inputs:
summary:
description: "render a Markdown summary of the audit (default true)"
required: false
default: true
no-deps:
description: "don't do any dependency resolution (requires fully pinned requirements) (default false)"
required: false
default: false
require-hashes:
description: "enforce hashes (requirements-style inputs only) (default false)"
required: false
default: false
vulnerability-service:
description: "the vulnerability service to use (PyPI or OSV, defaults to PyPI)"
required: false
default: "PyPI"
inputs:
description: "the inputs to audit, whitespace separated (defaults to current path)"
required: false
default: ""
virtual-environment:
description: "the virtual environment to audit within (default none)"
required: false
default: ""
local:
description: "for environmental audits, consider only packages marked local (default false)"
required: false
default: false
index-url:
description: "the base URL for the PEP 503-compatible package index to use"
required: false
default: ""
extra-index-urls:
description: "extra PEP 503-compatible indexes to use, whitespace separated"
required: false
default: ""
ignore-vulns:
description: "vulnerabilities to explicitly exclude, if present (whitespace separated)"
required: false
default: ""
disable-pip:
description: "disable pip"
required: false
default: false
locked:
description: "audit lock files from the local Python project"
required: false
default: false
internal-be-careful-allow-failure:
description: "don't fail the job if the audit fails (default false)"
required: false
default: false
internal-be-careful-extra-flags:
description: "extra flags to be passed in to pip-audit"
required: false
default: ""
outputs:
internal-be-careful-output:
description: "the column-formatted output from pip-audit, wrapped as base64"
value: "${{ steps.pip-audit.outputs.output }}"
runs:
using: "composite"
steps:
- name: Set up pip-audit
run: |
# NOTE: Sourced, not executed as a script.
source "${{ github.action_path }}/setup/setup.bash"
env:
GHA_PIP_AUDIT_VIRTUAL_ENVIRONMENT: "${{ inputs.virtual-environment }}"
shell: bash
- name: Run pip-audit
id: pip-audit
run: |
# NOTE: Sourced, not executed as a script.
source "${{ github.action_path }}/setup/venv.bash"
python "${{ github.action_path }}/action.py" "$GHA_PIP_AUDIT_INPUTS"
env:
GHA_PIP_AUDIT_INPUTS: "${{ inputs.inputs }}"
GHA_PIP_AUDIT_SUMMARY: "${{ inputs.summary }}"
GHA_PIP_AUDIT_NO_DEPS: "${{ inputs.no-deps }}"
GHA_PIP_AUDIT_REQUIRE_HASHES: "${{ inputs.require-hashes }}"
GHA_PIP_AUDIT_VULNERABILITY_SERVICE: "${{ inputs.vulnerability-service }}"
GHA_PIP_AUDIT_VIRTUAL_ENVIRONMENT: "${{ inputs.virtual-environment }}"
GHA_PIP_AUDIT_LOCAL: "${{ inputs.local }}"
GHA_PIP_AUDIT_INDEX_URL: "${{ inputs.index-url }}"
GHA_PIP_AUDIT_EXTRA_INDEX_URLS: "${{ inputs.extra-index-urls }}"
GHA_PIP_AUDIT_IGNORE_VULNS: "${{ inputs.ignore-vulns }}"
GHA_PIP_DISABLE_PIP: "${{ inputs.disable-pip }}"
GHA_PIP_AUDIT_LOCKED: "${{ inputs.locked }}"
GHA_PIP_AUDIT_INTERNAL_BE_CAREFUL_ALLOW_FAILURE: "${{ inputs.internal-be-careful-allow-failure }}"
GHA_PIP_AUDIT_INTERNAL_BE_CAREFUL_EXTRA_FLAGS: "${{ inputs.internal-be-careful-extra-flags }}"
shell: bash
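A typical job auditing a requirements file, sketched from the inputs above; the exact version tag is an assumption and should be replaced with a released tag:

```yaml
jobs:
  pip-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Audit Python dependencies
        uses: pypa/gh-action-pip-audit@v1.1.0   # exact tag is an assumption
        with:
          inputs: requirements.txt
```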
Action ID: marketplace/creyD/autoflake_action
Author: Conrad Großer <grosserconrad@gmail.com>
Publisher: creyD
Repository: github.com/creyD/autoflake_action
Automatically runs autoflake on all your changes.
| Name | Required | Description |
|---|---|---|
| commit_message | Optional | Commit message Default: Removed unused imports and variables |
| commit_options | Optional | Commit options |
| file_pattern | Optional | File pattern used for `git add` Default: * |
| checkpath | Optional | Path autoflake checks Default: . |
| options | Optional | Parameters for autoflake |
| dry | Optional | Should this script apply autoflake directly or just warn? |
| no_commit | Optional | Can be used to avoid committing the changes |
| github_token | Optional | GitHub Token or PAT token used to authenticate against a repository Default: ${{ github.token }} |
name: Autoflake Action
description: Automatically runs autoflake on all your changes.
author: Conrad Großer <grosserconrad@gmail.com>
inputs:
commit_message:
description: Commit message
required: false
default: 'Removed unused imports and variables'
commit_options:
description: Commit options
required: false
file_pattern:
description: File pattern used for `git add`
required: false
default: '*'
checkpath:
description: Path autoflake checks
required: false
default: '.'
options:
description: Parameters for autoflake
required: false
default: ''
dry:
description: Should this script apply autoflake directly or just warn?
required: false
default: false
no_commit:
description: Can be used to avoid committing the changes
required: false
default: false
github_token:
description: GitHub Token or PAT token used to authenticate against a repository
required: false
default: ${{ github.token }}
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'delete'
color: 'green'
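A sketch of running the action on pull requests; the `@master` ref is an assumption (pin a released tag), and the flags passed via `options` are ordinary autoflake CLI options:

```yaml
on: pull_request
jobs:
  autoflake:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Remove unused imports and variables
        uses: creyD/autoflake_action@master   # ref is an assumption; pin a released tag
        with:
          options: --remove-all-unused-imports --remove-unused-variables --in-place --recursive
          github_token: ${{ secrets.GITHUB_TOKEN }}
```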
Action ID: marketplace/andstor/docker2apptainer
Author: Unknown
Publisher: andstor
Repository: github.com/andstor/docker2apptainer
Convert a Docker container to an Apptainer container
| Name | Required | Description |
|---|---|---|
| docker-image | Required | Docker image |
| save-path | Required | Path to save the Apptainer sif file |
| Name | Description |
|---|---|
| image | Apptainer sif image file |
name: 'Docker2Apptainer'
description: 'Convert a Docker container to an Apptainer container'
inputs:
docker-image:
description: 'Docker image'
required: true
save-path:
description: 'Path to save the Apptainer sif file'
required: true
outputs:
image:
description: 'Apptainer sif image file'
runs:
using: 'node20'
main: 'dist/index.js'
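A sketch that converts an image and publishes the resulting `.sif` file as a workflow artifact; the version tag is an assumption:

```yaml
steps:
  - name: Convert Docker image to Apptainer
    id: convert
    uses: andstor/docker2apptainer@v1   # tag is an assumption
    with:
      docker-image: ubuntu:22.04
      save-path: ${{ github.workspace }}/ubuntu_22.04.sif
  - name: Upload sif image
    uses: actions/upload-artifact@v4
    with:
      name: apptainer-image
      path: ${{ steps.convert.outputs.image }}
```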
Action ID: marketplace/wei/git-sync
Author: Wei He <github@weispot.com>
Publisher: wei
Repository: github.com/wei/git-sync
🔃 Sync between two independent repositories
| Name | Required | Description |
|---|---|---|
| source_repo | Required | GitHub repo slug or full url |
| source_branch | Required | Branch name to sync from |
| destination_repo | Required | GitHub repo slug or full url |
| destination_branch | Required | Branch name to sync to |
| ssh_private_key | Optional | SSH key used to authenticate with source and destination ssh urls provided (optional if public or https url with authentication) |
| source_ssh_private_key | Optional | SSH key used to authenticate with source ssh url provided (optional if public or https url with authentication) |
| destination_ssh_private_key | Optional | SSH key used to authenticate with destination ssh url provided (optional if public or https url with authentication) |
name: Git Sync Action
author: Wei He <github@weispot.com>
description: 🔃 Sync between two independent repositories
branding:
icon: 'git-branch'
color: 'gray-dark'
inputs:
source_repo:
description: GitHub repo slug or full url
required: true
source_branch:
description: Branch name to sync from
required: true
destination_repo:
description: GitHub repo slug or full url
required: true
destination_branch:
description: Branch name to sync to
required: true
ssh_private_key:
description: SSH key used to authenticate with source and destination ssh urls provided (optional if public or https url with authentication)
required: false
source_ssh_private_key:
description: SSH key used to authenticate with source ssh url provided (optional if public or https url with authentication)
required: false
destination_ssh_private_key:
description: SSH key used to authenticate with destination ssh url provided (optional if public or https url with authentication)
required: false
runs:
using: 'docker'
image: 'Dockerfile'
env:
SSH_PRIVATE_KEY: ${{ inputs.ssh_private_key }}
SOURCE_SSH_PRIVATE_KEY: ${{ inputs.source_ssh_private_key }}
DESTINATION_SSH_PRIVATE_KEY: ${{ inputs.destination_ssh_private_key }}
args:
- ${{ inputs.source_repo }}
- ${{ inputs.source_branch }}
- ${{ inputs.destination_repo }}
- ${{ inputs.destination_branch }}
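A sketch that mirrors a branch to another repository over SSH; the repo slugs and secret name are placeholders, and the version tag is an assumption:

```yaml
steps:
  - name: Sync main branch to a mirror
    uses: wei/git-sync@v3   # tag is an assumption; pin a released version
    with:
      source_repo: org/source-repo                            # placeholder slug
      source_branch: main
      destination_repo: git@example.com:org/mirror-repo.git   # placeholder SSH URL
      destination_branch: main
      ssh_private_key: ${{ secrets.SSH_PRIVATE_KEY }}
```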
Action ID: marketplace/dflook/tofu-destroy-workspace
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-destroy-workspace
Delete an OpenTofu workspace, destroying all resources
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the OpenTofu root module directory. Default: . |
| workspace | Required | The name of the OpenTofu workspace to destroy and delete. |
| variables | Optional | Variables to set for the tofu destroy. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| backend_config | Optional | List of OpenTofu backend config values, one per line. |
| backend_config_file | Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| parallelism | Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
| failure-reason | When the job outcome is `failure`, this output may be set. The value may be one of: `destroy-failed` - the OpenTofu destroy operation failed; `state-locked` - the OpenTofu state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
| lock-info | When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: `{ "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" }` |
name: tofu-destroy-workspace
description: Delete an OpenTofu workspace, destroying all resources
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu root module directory.
required: false
default: "."
workspace:
description: The name of the OpenTofu workspace to destroy and delete.
required: true
variables:
description: |
Variables to set for the tofu destroy. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `destroy-failed` - The OpenTofu destroy operation failed.
- `state-locked` - The OpenTofu state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/destroy-workspace.sh
branding:
icon: globe
color: purple
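A sketch for tearing down a per-pull-request workspace; the module path is hypothetical and the version tag is an assumption:

```yaml
jobs:
  destroy-workspace:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Destroy ephemeral workspace
        uses: dflook/tofu-destroy-workspace@v1   # tag is an assumption
        with:
          path: environments/preview              # hypothetical root module path
          workspace: preview-${{ github.event.number }}
```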
Action ID: marketplace/Burnett01/rsync-deployments
Author: Burnett01
Publisher: Burnett01
Repository: github.com/Burnett01/rsync-deployments
GitHub Action for deploying code via rsync over ssh
| Name | Required | Description |
|---|---|---|
| switches | Required | The rsync switches (flags), e.g. `-avzr --delete` |
| rsh | Optional | The remote shell argument |
| legacy_allow_rsa_hostkeys | Optional | Enables support for legacy RSA host keys on OpenSSH 8.8+ Default: false |
| strict_hostkeys_checking | Optional | Controls strict host keys checking Default: false |
| path | Optional | The local path |
| remote_path | Required | The remote path |
| remote_host | Required | The remote host |
| remote_port | Optional | The remote port Default: 22 |
| remote_user | Required | The remote user |
| remote_key | Required | The remote key |
| remote_key_pass | Optional | The remote key passphrase |
| debug | Optional | Debug the action Default: false |
name: 'Rsync Deployments Action'
description: 'GitHub Action for deploying code via rsync over ssh'
author: 'Burnett01'
inputs:
switches:
description: 'The switches'
required: true
rsh:
description: 'The remote shell argument'
required: false
default: ''
legacy_allow_rsa_hostkeys:
description: 'Enables support for legacy RSA host keys on OpenSSH 8.8+'
required: false
default: 'false'
strict_hostkeys_checking:
description: 'Controls strict host keys checking'
required: false
default: 'false'
path:
description: 'The local path'
required: false
default: ''
remote_path:
description: 'The remote path'
required: true
remote_host:
description: 'The remote host'
required: true
remote_port:
description: 'The remote port'
required: false
default: 22
remote_user:
description: 'The remote user'
required: true
remote_key:
description: 'The remote key'
required: true
remote_key_pass:
description: 'The remote key passphrase'
required: false
default: ''
debug:
description: 'Debug the action'
required: false
default: 'false'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'send'
color: 'gray-dark'
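A sketch deploying a built `dist/` directory to a server; host, user, paths and secret name are placeholders, and the exact version tag is an assumption:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Deploy via rsync over SSH
    uses: Burnett01/rsync-deployments@7.0.1   # exact tag is an assumption; pin a released version
    with:
      switches: -avzr --delete
      path: dist/
      remote_path: /var/www/example/    # placeholder target path
      remote_host: example.com          # placeholder host
      remote_user: deploy               # placeholder user
      remote_key: ${{ secrets.DEPLOY_SSH_KEY }}
```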
Action ID: marketplace/azure/webapps-container-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/webapps-container-deploy
Deploy Container Web Apps to Azure. github.com/Azure/Actions
| Name | Required | Description |
|---|---|---|
| app-name | Required | Name of the Azure Web App |
| slot-name | Optional | Enter an existing Slot other than the Production slot Default: production |
| images | Required | Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For multi-container scenario multiple container image names can be provided (multi-line separated) |
| configuration-file | Optional | Path of the Docker-Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container scenario |
| container-command | Optional | Enter the start up command. For ex. dotnet run or dotnet filename.dll |
| Name | Description |
|---|---|
| webapp-url | URL to work with your webapp |
# This action is ARCHIVED, update your workflows to use the new action azure/webapps-deploy@v2 which can deploy to both Webapps and Webapp for containers.
name: 'Azure WebApp Container'
description: 'Deploy Container Web Apps to Azure. github.com/Azure/Actions'
inputs:
app-name:
description: 'Name of the Azure Web App'
required: true
deprecationMessage: 'This action is ARCHIVED and will not receive any updates, update your workflows to use azure/webapps-deploy@v2'
slot-name:
description: 'Enter an existing Slot other than the Production slot'
required: false
default: 'production'
images:
description: "Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For multi-container scenario multiple container image names can be provided (multi-line separated)"
required: true
configuration-file:
description: 'Path of the Docker-Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container scenario'
required: false
container-command:
description: 'Enter the start up command. For ex. dotnet run or dotnet filename.dll'
required: false
outputs:
webapp-url:
description: 'URL to work with your webapp'
branding:
icon: 'container-webapp.svg'
color: 'blue'
runs:
using: 'node12'
main: 'lib/main.js'
Action ID: marketplace/sobolevn/actions-chatops
Author: Hamel Husain
Publisher: sobolevn
Repository: github.com/sobolevn/actions-chatops
Listen for special comments, or chatops commands in the comments of a PR.
| Name | Required | Description |
|---|---|---|
| APP_PEM | Optional | string version of your PEM file used to authenticate as a GitHub App |
| APP_ID | Optional | your GitHub App ID. |
| TRIGGER_PHRASE | Required | this is the phrase in a PR comment that you want to trigger downstream Actions. Example - "/deploy-app-test" |
| INDICATOR_LABEL | Optional | label that will be added to the PR if a triggering comment is detected. This is used to trigger downstream Actions with the right context of the PR. |
| TEST_EVENT_PATH | Optional | An alternate place to fetch the payload for testing and debugging when making changes to this Action. This is set to the system environment variable $GITHUB_EVENT_PATH by default. |
| Name | Description |
|---|---|
| TRAILING_LINE | the text that immediately follows the triggering phrase on the same line. For example, "/trigger-phrase foo bar\n next line" will emit the value "foo bar". This is intended to be used as arguments for downstream actions. |
| TRAILING_TOKEN | the next token that immediately follows the triggering phrase on the same line. For example, "/trigger-phrase foo bar" will emit the value "foo". This is intended to be used as arguments for downstream actions. |
| BOOL_TRIGGERED | true or false depending on if the trigger phrase was detected and this is a pull request. |
| PULL_REQUEST_NUMBER | the number of the pull request |
| COMMENTER_USERNAME | The GitHub username of the person that made the triggering comment in the PR. |
| BRANCH_NAME | The name of the branch corresponding to the PR. |
| SHA | The SHA of the branch on the PR at the time the triggering comment was made. |
| APP_INSTALLATION_TOKEN | The installation access token for the GitHub App corresponding to the current repository. This is only retrieved if the `APP_PEM` and `APP_ID` inputs are supplied. |
name: 'ChatOps For Pull Requests'
description: Listen for special comments, or chatops commands in the comments of a PR.
author: Hamel Husain
inputs:
APP_PEM:
description: string version of your PEM file used to authenticate as a GitHub App
required: false
APP_ID:
description: you GITHUB App ID.
required: false
TRIGGER_PHRASE:
description: this is the phrase in a PR comment that you want to trigger downstream Actions. Example - "/deploy-app-test"
required: true
INDICATOR_LABEL:
description: label that wil be added to the PR if a triggering comment is detected. This is used to trigger downstream Actions with the right context of the PR.
required: false
default: ""
TEST_EVENT_PATH:
description: An alternate place to fetch the payload for testing and debugging when making changes to this Action. This is set to they system environment variable $GITHUB_EVENT_PATH by default.
required: false
default: ""
outputs:
TRAILING_LINE:
description: the text that immediately follows the triggering phrase that is on the same line. For example, "/trigger-phrase foo bar\n next line" will emit the value "foo bar" This is intended to be used as arguments for downstream actions.
TRAILING_TOKEN:
description: this is the next token that immediately follows the triggering phrase that is on the same line. For example, "/trigger-phrase foo bar" will emit the value "foo". This is intended to be used as arguments for downstream actions.
BOOL_TRIGGERED:
description: true or false depending on if the trigger phrase was detected and this is a pull request.
PULL_REQUEST_NUMBER:
description: the number of the pull request
COMMENTER_USERNAME:
description: The GitHub username of the person that made the triggering comment in the PR.
BRANCH_NAME:
description: The name of the branch corresponding to the PR.
SHA:
description: The SHA of the branch on the PR at the time the triggering comment was made.
APP_INSTALLATION_TOKEN:
description: The installation access token for the GitHub App corresponding to and the current repository. This is only retrieved if the `APP_PEM` and `APP_ID` inputs are supplied.
branding:
color: 'gray-dark'
icon: 'message-square'
runs:
using: 'docker'
image: 'docker://hamelsmu/chatops'
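A sketch of listening for a ChatOps command on PR comments; the `@master` ref is an assumption (pin a released tag), and passing `GITHUB_TOKEN` as an environment variable is also an assumption, since it is not listed among the inputs above:

```yaml
on:
  issue_comment:
    types: [created]
jobs:
  chatops:
    runs-on: ubuntu-latest
    steps:
      - name: Listen for /deploy-app-test
        id: chatops
        uses: sobolevn/actions-chatops@master   # ref is an assumption; pin a released tag
        with:
          TRIGGER_PHRASE: '/deploy-app-test'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumption; check the action's README for auth requirements
      - name: React to the command
        if: steps.chatops.outputs.BOOL_TRIGGERED == 'true'
        run: echo "Triggered by ${{ steps.chatops.outputs.COMMENTER_USERNAME }} on ${{ steps.chatops.outputs.BRANCH_NAME }}"
```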
Action ID: marketplace/github/issue-labeler
Author: GitHub
Publisher: github
Repository: github.com/github/issue-labeler
Labels issues automatically based on their body
| Name | Required | Description |
|---|---|---|
| repo-token | Optional | The GITHUB_TOKEN secret Default: ${{ github.token }} |
| configuration-path | Required | The path to the label configuration file. If the file doesn't exist at the specified path on the runner, the action will read from the source repository via the GitHub API. |
| enable-versioned-regex | Required | Controls if versioned regex templates are being used |
| versioned-regex | Optional | The regex version number to use. Only required if using versioned regex files |
| not-before | Optional | Optional; any issues created prior to this timestamp will be ignored |
| body-missing-regex-label | Optional | The name of the label that should be added to an issue where the specified `version-regex` cannot be found. |
| include-title | Optional | Include the title in the regex target Default: 0 |
| include-body | Optional | Include the body in the regex target Default: 1 |
| sync-labels | Optional | Remove the label from the issue if the label regex does not match Default: 0 |
| issue-number | Optional | The number of the issue/PR to label Default: ${{ github.event.issue.number \|\| github.event.pull_request.number }} |
| Name | Description |
|---|---|
| labels-added | The labels that were added by the action, as a stringified array. |
| labels-removed | The labels that were removed by the action, as a stringified array. |
name: 'RegEx Issue Labeler'
description: 'Labels issues automatically based on their body'
author: 'GitHub'
inputs:
repo-token:
description: 'The GITHUB_TOKEN secret'
required: false
default: '${{ github.token }}'
configuration-path:
description: "The path to the label configuration file. If the file doesn't exist at the specified path on the runner, action will read from the source repository via the Github API."
required: true
enable-versioned-regex:
description: 'Controls if versioned regex templates are being used'
required: true
versioned-regex:
description: 'The regex version number to use. Only required if using versioned regex files'
required: false
not-before:
description: 'Is optional and will result in any issues prior to this timestamp to be ignored'
required: false
body-missing-regex-label:
description: 'The name of the label that should be added to an issue where the specified `version-regex` can not be found.'
required: false
include-title:
description: 'Include the title in the regex target'
required: false
default: "0"
include-body:
description: 'Include the body in the regex target'
required: false
default: "1"
sync-labels:
description: 'Remove the label from the issue if the label regex does not match'
required: false
default: "0"
issue-number:
description: 'The number of the issue/PR to label'
required: false
default: ${{ github.event.issue.number || github.event.pull_request.number }}
outputs:
labels-added:
description: 'The labels that were added by the action, as a stringified array.'
labels-removed:
description: 'The labels that were removed by the action, as a stringified array.'
runs:
using: 'node20'
main: 'lib/index.js'
branding:
icon: 'activity'
color: 'blue'
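A sketch of labelling newly opened issues from a regex configuration file; the configuration path is hypothetical and the major version tag is an assumption:

```yaml
on:
  issues:
    types: [opened, edited]
jobs:
  label:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
    steps:
      - uses: github/issue-labeler@v3   # tag is an assumption
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          configuration-path: .github/labeler.yml   # hypothetical config path
          enable-versioned-regex: 0
```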
Action ID: marketplace/github/stale-repos
Author: github
Publisher: github
Repository: github.com/github/stale-repos
A GitHub Action to identify stale repos within an organization
| Name | Description |
|---|---|
| inactiveRepos | Inactive Repos in the organization |
---
name: "stale-repos"
author: "github"
description: "A GitHub Action to identify stale repos within an organization"
outputs:
inactiveRepos:
description: "Inactive Repos in the organization"
runs:
using: "docker"
image: "docker://ghcr.io/github/stale_repos:v6"
branding:
icon: "check-square"
color: "white"
Action ID: marketplace/sobolevn/repo-visualizer
Author: GitHub OCTO
Publisher: sobolevn
Repository: github.com/sobolevn/repo-visualizer
A GitHub Action that creates an SVG diagram of your repo
| Name | Required | Description |
|---|---|---|
| output_file | Optional | A path (relative to the root of your repo) to where you would like the diagram to live. For example: images/diagram.svg. Default: diagram.svg |
| excluded_paths | Optional | A list of paths to exclude from the diagram, separated by commas. For example: dist,node_modules |
name: "Repo Visualizer"
description: "A GitHub Action that creates an SVG diagram of your repo"
author: "GitHub OCTO"
inputs:
output_file:
description: "A path (relative to the root of your repo) to where you would like the diagram to live. For example: images/diagram.svg. Default: diagram.svg"
required: false
excluded_paths:
description: "A list of paths to exclude from the diagram, separated by commas. For example: dist,node_modules"
required: false
runs:
using: "node12"
main: "index.js"
branding:
color: "purple"
icon: "target"
Action ID: marketplace/azure/build-vm-image
Author: Unknown
Publisher: azure
Repository: github.com/azure/build-vm-image
Create custom virtual machine images that contain artifacts built in CI workflows
| Name | Required | Description |
|---|---|---|
| location | Optional | This is the Azure region in which the Image Builder will run. |
| resource-group-name | Required | This is the Resource Group where the temporary Image Builder template resource will be created. |
| image-builder-template-name | Optional | The name of the image builder template resource to be used for creating and running the Image Builder service. |
| build-timeout-in-minutes | Optional | An integer used as the timeout in minutes for running the image build. Default: 240 |
| vm-size | Optional | You can override the VM size from the default value, i.e. Standard_D1_v2. Default: Standard_D1_v2 |
| managed-identity | Optional | The identity that will be used to do the role assignment and resource creation |
| source-image-type | Optional | [ PlatformImage \| SharedImageGallery \| ManagedImage ] Default: PlatformImage |
| source-os-type | Required | OS types supported: [ linux \| windows ]. |
| source-image | Optional | Value of source-image supported by Azure Image Builder. |
| customizer-source | Optional | This takes the path to a directory or a file in the runner. By default, it points to the default download directory of the GitHub runner. |
| customizer-script | Optional | Multiple inline PowerShell or shell commands can be entered, using variables to point to directories inside the downloaded location. |
| customizer-windows-update | Optional | Boolean, false by default. For Windows images only; the image builder will run Windows Update at the end of the customizations and also handle the reboots it requires. |
| dist-type | Optional | ManagedImage \| SharedImageGallery \| VHD Default: ManagedImage |
| dist-resource-id | Optional | Image Resource Id to be created by AIB |
| dist-location | Optional | Location of the image created by AIB |
| run-output-name | Optional | Every Image Builder run is identified with a unique run id. |
| dist-image-tags | Optional | The values set will be used to set the user-defined tags on the custom image artifact created. |
| Name | Description |
|---|---|
| imagebuilder-run-status | The value of this output is the Image Builder run status, set to either Succeeded or Failed based on the runState returned by Azure Image Builder. |
| run-output-name | The action emits the output value run-output-name, which can be used to get the details of the Image Builder run. |
| custom-image-uri | Upon successful completion, the action emits the URI or resource id of the image distributed. |
name: "Build Azure Virtual Machine Image"
description: "Create custom virtual machine images that contain artifacts built in CI workflows"
inputs:
#general inputs
location:
description: 'This is the Azure region in which the Image Builder will run.'
resource-group-name:
description: 'This is the Resource Group where the temporary Imagebuilder Template resource will be created.'
required: true
image-builder-template-name:
description: 'The name of the image builder template resource to be used for creating and running the Image builder service.'
build-timeout-in-minutes:
description: 'The value is an integer which is used as timeout in minutes for running the image build.'
default: 240
vm-size:
description: 'You can override the VM size, from the default value i.e. Standard_D1_v2.'
default: 'Standard_D1_v2'
managed-identity:
description: 'The identity that will be used to do the role assignment and resource creation'
#source inputs
source-image-type:
description: '[ PlatformImage | SharedImageGallery | ManagedImage ]'
default: 'PlatformImage'
source-os-type:
description: 'OS types supported: [ linux | windows ].'
required: true
source-image:
description: 'Value of source-image supported by Azure Image Builder.'
#customization inputs
customizer-source:
description: 'This takes the path to a directory or a file in the runner. By default, it points to the default download directory of the github runner.'
customizer-script:
description: 'The customer can enter multi inline powershell or shell commands and use variables to point to directories inside the downloaded location.'
customizer-windows-update:
description: 'The value is boolean and set to false by default. This value is for Windows images only, the image builder will run Windows Update at the end of the customizations and also handle the reboots it requires.'
default: false
#distribution inputs
dist-type:
description: 'ManagedImage | SharedImageGallery | VHD'
default: 'ManagedImage'
dist-resource-id:
description: 'Image Resource Id to be created by AIB'
dist-location:
description: 'location of Image created by AIB'
run-output-name:
description: 'Every Image builder run is identified with a unique run id.'
dist-image-tags:
description: 'The values set will be used to set the user defined tags on the custom image artifact created.'
outputs:
imagebuilder-run-status:
description: 'This value of this output will be the value of Image builder Run status set to either Succeeded or Failed based on the runState returned by Azure Image Builder.'
run-output-name:
description: 'The action emits output value run-output-name which can be used to get the details of the Image Builder Run.'
custom-image-uri:
description: 'Upon successful completion, The github action emits the URI or resource id of the Image distributed.'
runs:
using: 'node12'
main: 'lib/index.js'
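A hedged sketch, assuming the workflow has already authenticated with `azure/login` and that a `v0` tag exists; the resource names, identity, source image URN, and script paths are placeholders:

```yaml
steps:
  - uses: azure/login@v2
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  - name: Build custom VM image
    uses: azure/build-vm-image@v0   # tag is an assumption
    with:
      resource-group-name: my-image-rg                         # placeholder resource group
      managed-identity: my-image-builder-identity              # placeholder identity
      location: westus2
      source-os-type: linux
      source-image: Canonical:UbuntuServer:18.04-LTS:latest    # placeholder platform image URN
      customizer-script: |
        sudo mkdir -p /opt/app
        sudo cp -r /tmp/downloads/* /opt/app                   # placeholder paths
```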
Action ID: marketplace/dflook/tofu-plan
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-plan
Create an OpenTofu plan
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the OpenTofu root module to generate a plan for. Default: . |
| workspace | Optional | OpenTofu workspace to run the plan for. Default: default |
| label | Optional | A friendly name for the environment the OpenTofu configuration is for. This will be used in the PR comment for easy identification. If this is set, it must be the same as the `label` used in any corresponding [`dflook/tofu-apply`](https://github.com/dflook/terraform-github-actions/tree/main/tofu-apply) action. |
| variables | Optional | Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| backend_config | Optional | List of OpenTofu backend config values, one per line. |
| backend_config_file | Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| replace | Optional | List of resources to replace, one per line. |
| target | Optional | List of resources to target, one per line. The plan will be limited to these resources and their dependencies. |
| exclude | Optional | List of resources to exclude from operations, one per line. The plan will include all resources except the specified ones and their dependencies. Requires OpenTofu 1.9+. |
| destroy | Optional | Set to `true` to generate a plan to destroy all resources. This generates a plan in [destroy mode](https://opentofu.org/docs/cli/commands/plan/#planning-modes). Default: false |
| refresh | Optional | Set to `false` to skip synchronisation of the OpenTofu state with actual resources. This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans. Default: true |
| add_github_comment | Optional | Controls whether a comment is added to the PR with the generated plan. The default is `true`, which adds a comment to the PR with the results of the plan. Set to `changes-only` to add a comment only when the plan indicates there are changes to apply. Set to `always-new` to always create a new comment for each plan, instead of updating the previous comment. Set to `false` to disable the comment - the plan will still appear in the workflow log. Default: true |
| parallelism | Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
| changes | Set to 'true' if the plan would apply any changes, 'false' if it wouldn't. |
| plan_path | This is the path to the generated plan in an opaque binary format. The path is relative to the Actions workspace. The plan can be used as the `plan_file` input to the [dflook/tofu-apply](https://github.com/dflook/terraform-github-actions/tree/main/tofu-apply) action. OpenTofu plans often contain sensitive information, so this output should be treated with care. |
| json_plan_path | This is the path to the generated plan in [JSON Output Format](https://opentofu.org/docs/internals/json-format/). The path is relative to the Actions workspace. OpenTofu plans often contain sensitive information, so this output should be treated with care. |
| text_plan_path | This is the path to the generated plan in a human-readable format. The path is relative to the Actions workspace. |
| to_add | The number of resources that would be affected by this operation. |
| to_change | The number of resources that would be affected by this operation. |
| to_destroy | The number of resources that would be affected by this operation. |
| to_move | The number of resources that would be affected by this operation. |
| to_import | The number of resources that would be affected by this operation. |
| run_id | If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id. |
name: tofu-plan
description: Create an OpenTofu plan
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu root module to generate a plan for.
required: false
default: "."
workspace:
description: OpenTofu workspace to run the plan for.
required: false
default: "default"
label:
description: |
A friendly name for the environment the OpenTofu configuration is for.
This will be used in the PR comment for easy identification.
If this is set, it must be the same as the `label` used in any corresponding [`dflook/tofu-apply`](https://github.com/dflook/terraform-github-actions/tree/main/tofu-apply) action.
required: false
default: ""
variables:
description: |
Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
replace:
description: List of resources to replace, one per line.
required: false
default: ""
target:
description: |
List of resources to target, one per line.
The plan will be limited to these resources and their dependencies.
required: false
default: ""
exclude:
description: |
List of resources to exclude from operations, one per line.
The plan will include all resources except the specified ones and their dependencies.
Requires OpenTofu 1.9+.
required: false
default: ""
destroy:
description: |
Set to `true` to generate a plan to destroy all resources.
This generates a plan in [destroy mode](https://opentofu.org/docs/cli/commands/plan/#planning-modes).
required: false
default: "false"
refresh:
description: |
Set to `false` to skip synchronisation of the OpenTofu state with actual resources.
This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans.
required: false
default: "true"
add_github_comment:
description: |
Controls whether a comment is added to the PR with the generated plan.
The default is `true`, which adds a comment to the PR with the results of the plan.
Set to `changes-only` to add a comment only when the plan indicates there are changes to apply.
Set to `always-new` to always create a new comment for each plan, instead of updating the previous comment.
Set to `false` to disable the comment - the plan will still appear in the workflow log.
required: false
default: "true"
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
changes:
description: Set to 'true' if the plan would apply any changes, 'false' if it wouldn't.
plan_path:
description: |
This is the path to the generated plan in an opaque binary format.
The path is relative to the Actions workspace.
The plan can be used as the `plan_file` input to the [dflook/tofu-apply](https://github.com/dflook/terraform-github-actions/tree/main/tofu-apply) action.
OpenTofu plans often contain sensitive information, so this output should be treated with care.
json_plan_path:
description: |
This is the path to the generated plan in [JSON Output Format](https://opentofu.org/docs/internals/json-format/).
The path is relative to the Actions workspace.
OpenTofu plans often contain sensitive information, so this output should be treated with care.
text_plan_path:
description: |
This is the path to the generated plan in a human-readable format.
The path is relative to the Actions workspace.
to_add:
description: The number of resources that would be affected by this operation.
to_change:
description: The number of resources that would be affected by this operation.
to_destroy:
description: The number of resources that would be affected by this operation.
to_move:
description: The number of resources that would be affected by this operation.
to_import:
description: The number of resources that would be affected by this operation.
run_id:
description: If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/plan.sh
branding:
icon: globe
color: purple
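A sketch of planning on pull requests so the result is posted as a PR comment; the module path and label are placeholders, the version tag is an assumption, and exposing `GITHUB_TOKEN` in the environment for the comment follows the dflook actions' usual pattern rather than anything stated in the metadata above:

```yaml
on: pull_request
permissions:
  contents: read
  pull-requests: write
jobs:
  plan:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumption; used for the PR comment
    steps:
      - uses: actions/checkout@v4
      - name: Create OpenTofu plan
        uses: dflook/tofu-plan@v1   # tag is an assumption
        with:
          path: environments/prod   # hypothetical root module path
          label: production
```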
Action ID: marketplace/azure/data-factory-deploy-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/data-factory-deploy-action
Deploy Azure Data Factory resources
| Name | Required | Description |
|---|---|---|
| resourceGroupName | Required | Data Factory resource group name |
| dataFactoryName | Required | Data factory name |
| armTemplateFile | Optional | ARM template file name Default: ARMTemplateForFactory.json |
| armTemplateParametersFile | Optional | ARM template parameters file name Default: ARMTemplateParametersForFactory.json |
| additionalParameters | Optional | Parameters which will be replaced in the ARM template |
| skipAzModuleInstallation | Optional | Skips the Az module installation when set to true Default: false |
name: data-factory-deploy
description: Deploy Azure Data Factory resources
inputs:
resourceGroupName:
description: 'Data Factory resource group name'
required: true
dataFactoryName:
description: 'Data factory name'
required: true
armTemplateFile:
description: 'ARM template file name'
required: false
default: 'ARMTemplateForFactory.json'
armTemplateParametersFile:
description: 'ARM template parameters file name'
required: false
default: 'ARMTemplateParametersForFactory.json'
additionalParameters:
description: 'Parameters which will be replaced in the ARM template'
required: false
default: ''
skipAzModuleInstallation:
description: 'Parameters which skip the Az module installation'
required: false
default: 'false'
runs:
using: 'composite'
steps:
- name: Install Az PowerShell module
run: if('${{ inputs.skipAzModuleInstallation }}' -ne 'true') { Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force }
shell: pwsh
- name: Run Pre-deployment script
run: |
${{ github.action_path }}/PrePostDeploymentScript.ps1 `
-armTemplate "${{ inputs.armTemplateFile }}" `
-ResourceGroupName "${{ inputs.resourceGroupName }}" `
-DataFactoryName "${{ inputs.dataFactoryName }}" `
-predeployment $true `
-deleteDeployment $false
shell: pwsh
- name: Run ARM deploy
uses: azure/arm-deploy@v1
with:
resourceGroupName: ${{ inputs.resourceGroupName }}
template: ${{ inputs.armTemplateFile }}
parameters: ${{ inputs.armTemplateParametersFile }} factoryName=${{ inputs.dataFactoryName }} ${{ inputs.additionalParameters }}
- name: Run Post-deployment script
run: |
${{ github.action_path }}/PrePostDeploymentScript.ps1 `
-armTemplate "${{ inputs.armTemplateFile }}" `
-ResourceGroupName "${{ inputs.resourceGroupName }}" `
-DataFactoryName '${{ inputs.dataFactoryName }}' `
-predeployment $false `
-deleteDeployment $true
shell: pwsh
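A sketch of deploying exported Data Factory ARM templates, assuming the workflow has already logged in to Azure with an Az PowerShell session; resource names, paths, and the exact version tag are placeholders or assumptions:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: azure/login@v2
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
      enable-AzPSSession: true
  - name: Deploy Data Factory resources
    uses: azure/data-factory-deploy-action@v1.2.0   # exact tag is an assumption
    with:
      resourceGroupName: my-adf-rg                  # placeholder resource group
      dataFactoryName: my-data-factory              # placeholder factory name
      armTemplateFile: ExportedArmTemplate/ARMTemplateForFactory.json   # placeholder path
```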
Action ID: marketplace/actions/jekyll-build-pages
Author: GitHub
Publisher: actions
Repository: github.com/actions/jekyll-build-pages
A simple GitHub Action for producing Jekyll build artifacts compatible with GitHub Pages
| Name | Required | Description |
|---|---|---|
| source | Optional | Directory where the source files reside. Default: ./ |
| destination | Optional | Output directory of the build. Although it can be nested inside the source, it cannot be the same as the source directory. Default: ./_site |
| future | Optional | Publishes posts with a future date. When set to true, the build is made with the --future option, which overrides the future option that may be set in a Jekyll configuration file. |
| build_revision | Optional | The SHA-1 of the git commit for which the build is running. Defaults to GITHUB_SHA. Default: ${{ github.sha }} |
| verbose | Optional | Verbose output Default: True |
| token | Required | GitHub token Default: ${{ github.token }} |
name: 'Build Jekyll for GitHub Pages'
description: 'A simple GitHub Action for producing Jekyll build artifacts compatible with GitHub Pages'
author: 'GitHub'
inputs:
source:
description: 'Directory where the source files reside.'
required: false
default: ./
destination:
description: 'Output directory of the build. Although it can be nested inside the source, it cannot be the same as the source directory.'
required: false
default: ./_site
future:
description: 'Publishes posts with a future date. When set to true, the build is made with the --future option which overrides the future option that may be set in a Jekyll configuration file.'
required: false
default: false
build_revision:
description: 'The SHA-1 of the git commit for which the build is running. Default to GITHUB_SHA.'
required: false
default: ${{ github.sha }}
verbose:
description: 'Verbose output'
required: false
default: true
token:
description: 'GitHub token'
required: true
default: ${{ github.token }}
runs:
using: 'docker'
image: 'docker://ghcr.io/actions/jekyll-build-pages:v1.0.13'
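A sketch of the common Pages pattern, building with Jekyll and then uploading the output as a Pages artifact; the upload step and its version tag are assumptions, not part of this action's metadata:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build with Jekyll
        uses: actions/jekyll-build-pages@v1
        with:
          source: ./
          destination: ./_site
      - name: Upload Pages artifact
        uses: actions/upload-pages-artifact@v3   # companion step; tag is an assumption
```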
Action ID: marketplace/amirisback/desktop-news-app
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/desktop-news-app
Desktop News App
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Desktop News App'
description: 'Desktop News App'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/peter-evans/create-or-update-comment
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/create-or-update-comment
Create or update an issue or pull request comment
| Name | Required | Description |
|---|---|---|
token |
Optional | GITHUB_TOKEN or a repo scoped PAT. Default: ${{ github.token }} |
repository |
Optional | The full name of the repository in which to create or update a comment. Default: ${{ github.repository }} |
issue-number |
Optional | The number of the issue or pull request in which to create a comment. |
comment-id |
Optional | The id of the comment to update. |
body |
Optional | The comment body. Cannot be used in conjunction with `body-path`. |
body-path |
Optional | The path to a file containing the comment body. Cannot be used in conjunction with `body`. |
body-file |
Optional | Deprecated in favour of `body-path`. |
edit-mode |
Optional | The mode when updating a comment, "replace" or "append". Default: append |
append-separator |
Optional | The separator to use when appending to an existing comment. (`newline`, `space`, `none`) Default: newline |
reactions |
Optional | A comma or newline separated list of reactions to add to the comment. |
reactions-edit-mode |
Optional | The mode when updating comment reactions, "replace" or "append". Default: append |
| Name | Description |
|---|---|
comment-id |
The id of the created comment |
name: 'Create or Update Comment'
description: 'Create or update an issue or pull request comment'
inputs:
token:
description: 'GITHUB_TOKEN or a repo scoped PAT.'
default: ${{ github.token }}
repository:
description: 'The full name of the repository in which to create or update a comment.'
default: ${{ github.repository }}
issue-number:
description: 'The number of the issue or pull request in which to create a comment.'
comment-id:
description: 'The id of the comment to update.'
body:
description: 'The comment body. Cannot be used in conjunction with `body-path`.'
body-path:
description: 'The path to a file containing the comment body. Cannot be used in conjunction with `body`.'
body-file:
description: 'Deprecated in favour of `body-path`.'
edit-mode:
description: 'The mode when updating a comment, "replace" or "append".'
default: 'append'
append-separator:
description: 'The separator to use when appending to an existing comment. (`newline`, `space`, `none`)'
default: 'newline'
reactions:
description: 'A comma or newline separated list of reactions to add to the comment.'
reactions-edit-mode:
description: 'The mode when updating comment reactions, "replace" or "append".'
default: 'append'
outputs:
comment-id:
description: 'The id of the created comment'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'message-square'
color: 'gray-dark'
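As a hedged usage sketch, the step below creates a comment on the pull request that triggered the workflow; the `@v4` tag and the comment text are assumptions.

```yaml
# Sketch only: comment on the current pull request (version tag assumed).
- name: Comment on the pull request
  uses: peter-evans/create-or-update-comment@v4
  with:
    issue-number: ${{ github.event.pull_request.number }}
    body: |
      Thanks for the contribution!
    reactions: rocket
```

The `comment-id` output of this step can then be reused in a later run to append to or replace the same comment.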
Action ID: marketplace/lukka/install-llvm-action
Author: Unknown
Publisher: lukka
Repository: github.com/lukka/install-llvm-action
Downloads and installs LLVM and Clang binaries.
| Name | Required | Description |
|---|---|---|
version |
Required | The version of LLVM and Clang binaries to install. |
force-version |
Optional | Whether to accept unsupported LLVM and Clang versions. |
ubuntu-version |
Optional | The version override of Ubuntu to use for the Linux platform. |
directory |
Optional | The directory to install LLVM and Clang binaries to. |
cached |
Optional | Whether the LLVM and Clang binaries were cached. |
download-url |
Optional | The URL to download LLVM and Clang binaries from. |
auth |
Optional | The Authorization header to use when downloading LLVM and Clang binaries. |
env |
Optional | Whether to set CC and CXX environment variables to Clang paths. |
| Name | Description |
|---|---|
version |
The full version of LLVM and Clang binaries installed. |
name: "Install LLVM and Clang"
description: "Downloads and installs LLVM and Clang binaries."
branding:
icon: "arrow-down-circle"
color: "black"
inputs:
version:
description: "The version of LLVM and Clang binaries to install."
required: true
force-version:
description: "Whether to accept unsupported LLVM and Clang versions."
required: false
ubuntu-version:
description: "The version override of Ubuntu to use for the Linux platform."
required: false
directory:
description: "The directory to install LLVM and Clang binaries to."
required: false
cached:
description: "Whether the LLVM and Clang binaries were cached."
required: false
download-url:
description: "The URL to download LLVM and Clang binaries from."
required: false
auth:
description: "The Authorization header to use when downloading LLVM and Clang binaries."
required: false
env:
description: "Whether to set CC and CXX environment variables to Clang paths."
required: false
outputs:
version:
description: "The full version of LLVM and Clang binaries installed."
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/amirisback/frogo-android-ui-kit
Author: Muhammad Faisal Amir - Frogobox
Publisher: amirisback
Repository: github.com/amirisback/frogo-android-ui-kit
Library Easy UI Kit Based on Design Guideline, Full and Clear Documentation, :books:
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Android-UI-Kit'
description: 'Library Easy UI Kit Based on Design Guideline, Full and Clear Documentation, :books:'
author: 'Muhammad Faisal Amir - Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/pgrimaud/find-and-replace-pull-request-body
Author: Ivan Gabriele
Publisher: pgrimaud
Repository: github.com/pgrimaud/find-and-replace-pull-request-body
Replace workflow related Pull Request body.
| Name | Required | Description |
|---|---|---|
githubToken |
Required | GitHub Personal Access Token |
prNumber |
Optional | Pull request number (if outside the pull_request context) |
body |
Optional | New Pull Request body |
find |
Optional | String to find in Pull Request body |
isHtmlCommentTag |
Optional | Treat `find` input as an HTML Comment Tag |
replace |
Optional | Replacement string in Pull Request body |
name: Find And Replace Pull Request Body
description: Replace workflow related Pull Request body.
author: Ivan Gabriele
branding:
color: yellow
icon: git-pull-request
inputs:
githubToken:
description: 'GitHub Personal Access Token'
required: true
prNumber:
description: 'Pull request number (if outside the pull_request context)'
body:
description: 'New Pull Request body'
find:
description: 'String to find in Pull Request body'
isHtmlCommentTag:
description: 'Treat `find` input as an HTML Comment Tag'
replace:
description: 'Replacement string in Pull Request body'
runs:
using: node20
main: ./index.dist.js
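A hedged usage sketch: replace a placeholder string in the pull request body with a value computed by the workflow. The `@v1` tag and the placeholder/URL values are illustrative assumptions.

```yaml
# Sketch only: rewrite part of the PR description (tag and values assumed).
- name: Update pull request body
  uses: pgrimaud/find-and-replace-pull-request-body@v1
  with:
    githubToken: ${{ secrets.GITHUB_TOKEN }}
    find: 'PREVIEW_URL_PLACEHOLDER'
    replace: 'https://preview.example.com/pr-${{ github.event.number }}'
```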
Action ID: marketplace/Harmon758/postgresql-action
Author: Harmon
Publisher: Harmon758
Repository: github.com/Harmon758/postgresql-action
Setup a PostgreSQL database
| Name | Required | Description |
|---|---|---|
postgresql version |
Optional | Version of PostgreSQL to use Default: latest |
postgresql db |
Optional | POSTGRES_DB - name for the default database that is created |
postgresql user |
Optional | POSTGRES_USER - create the specified user with superuser power |
postgresql password |
Optional | POSTGRES_PASSWORD - superuser password |
name: 'Setup PostgreSQL'
description: 'Setup a PostgreSQL database'
author: 'Harmon'
branding:
icon: 'database'
color: 'blue'
inputs:
# See https://hub.docker.com/_/postgres for supported versions
# and further details on input environment variables
postgresql version:
description: 'Version of PostgreSQL to use'
required: false
default: 'latest'
postgresql db:
description: 'POSTGRES_DB - name for the default database that is created'
required: false
default: ''
postgresql user:
description: 'POSTGRES_USER - create the specified user with superuser power'
required: false
default: ''
postgresql password:
description: 'POSTGRES_PASSWORD - superuser password'
required: false
default: ''
runs:
using: 'docker'
image: 'Dockerfile'
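A hedged usage sketch: start PostgreSQL before a test step. Note that the input names really do contain spaces, as declared above; the `@v1` tag, version and credentials are assumptions.

```yaml
# Sketch only: provision PostgreSQL, then run tests against it (tag and values assumed).
- uses: Harmon758/postgresql-action@v1
  with:
    postgresql version: '14'
    postgresql db: app_test
    postgresql user: app
    postgresql password: ${{ secrets.TEST_DB_PASSWORD }}
- name: Run tests
  run: pytest
  env:
    DATABASE_URL: postgresql://app:${{ secrets.TEST_DB_PASSWORD }}@localhost:5432/app_test
```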
Action ID: marketplace/dflook/terraform-check
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-check
Check if there are Terraform changes to apply
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the Terraform root module to check Default: . |
workspace |
Optional | Terraform workspace to run the plan in Default: default |
variables |
Optional | Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of Terraform backend config values, one per line. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure` because there are outstanding changes to apply, this will be set to 'changes-to-apply'. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run a step when there are changes to apply. |
name: terraform-check
description: Check if there are Terraform changes to apply
author: Daniel Flook
inputs:
path:
description: Path to the Terraform root module to check
required: false
default: "."
workspace:
description: Terraform workspace to run the plan in
required: false
default: "default"
variables:
description: |
Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure` because there are outstanding changes to apply, this will be set to 'changes-to-apply'.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a step when there are changes to apply.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/check.sh
branding:
icon: globe
color: purple
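A hedged usage sketch: a scheduled drift check that uses the `failure-reason` output to tell outstanding changes apart from other failures. The `@v1` tag and the module path are assumptions.

```yaml
# Sketch only: nightly Terraform drift detection (tag and path assumed).
jobs:
  check-drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: check
        uses: dflook/terraform-check@v1
        with:
          path: infrastructure
      - name: Report drift
        if: failure() && steps.check.outputs.failure-reason == 'changes-to-apply'
        run: echo "Terraform drift detected - there are changes to apply"
```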
Action ID: marketplace/peter-evans/rebase
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/rebase
Rebase pull requests in a repository
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub auth token Default: ${{ github.token }} |
repository |
Optional | The target GitHub repository Default: ${{ github.repository }} |
head |
Optional | Filter pull requests by head user or head organization and branch name in the format user:ref-name or organization:ref-name. For example: github:new-script-format or octocat:test-branch. |
base |
Optional | Filter pull requests by base branch name. Example: gh-pages. |
include-labels |
Optional | A comma or newline separated list of pull request labels to include. Allows any labels if unspecified. |
exclude-labels |
Optional | A comma or newline separated list of pull request labels to exclude |
exclude-drafts |
Optional | Exclude draft pull requests |
rebase-options |
Optional | A comma or newline separated list of options to pass to the git rebase command. For example, `-Xtheirs`. |
name: 'Rebase Pulls'
description: 'Rebase pull requests in a repository'
inputs:
token:
description: 'GitHub auth token'
default: ${{ github.token }}
repository:
description: 'The target GitHub repository'
default: ${{ github.repository }}
head:
description: >
Filter pull requests by head user or head organization and branch name in the format user:ref-name or organization:ref-name.
For example: github:new-script-format or octocat:test-branch.
base:
description: >
Filter pull requests by base branch name.
Example: gh-pages.
include-labels:
description: >
A comma or newline separated list of pull request labels to include.
Allows any labels if unspecified.
exclude-labels:
description: 'A comma or newline separated list of pull request labels to exclude'
exclude-drafts:
description: 'Exclude draft pull requests'
default: false
rebase-options:
description: 'A comma or newline separated list of options to pass to the git rebase command. For example, `-Xtheirs`.'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'git-pull-request'
color: 'gray-dark'
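A hedged usage sketch: rebase open pull requests that carry a given label onto the base branch, for example from a scheduled workflow. The `@v3` tag and the label name are assumptions.

```yaml
# Sketch only: rebase labelled pull requests (tag and label assumed).
- uses: peter-evans/rebase@v3
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    base: main
    include-labels: autorebase
    exclude-drafts: true
```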
Action ID: marketplace/andstor/clover2lcov-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/clover2lcov-action
Convert Clover files to LCOV reports
| Name | Required | Description |
|---|---|---|
src |
Required | File path to clover report |
dst |
Optional | Destination path to final lcov report |
| Name | Description |
|---|---|
file |
The file path for the resulting lcov report file |
name: 'clover2lcov'
description: 'Convert Clover files to LCOV reports'
author: 'André Storhaug'
branding:
icon: 'file-text'
color: 'purple'
inputs:
src:
description: 'File path to clover report'
required: true
dst:
description: 'Destination path to final lcov report'
required: false
outputs:
file:
description: 'The file path for the resulting lcov report file'
runs:
using: 'node12'
main: 'dist/index.js'
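A hedged usage sketch: convert a Clover XML report to LCOV and read back the resulting path from the `file` output. The `@v1` tag and the file paths are assumptions.

```yaml
# Sketch only: convert Clover coverage to LCOV (tag and paths assumed).
- id: clover2lcov
  uses: andstor/clover2lcov-action@v1
  with:
    src: coverage/clover.xml
    dst: coverage/lcov.info
- run: echo "LCOV report written to ${{ steps.clover2lcov.outputs.file }}"
```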
Action ID: marketplace/actions/dependency-review-action
Author: GitHub
Publisher: actions
Repository: github.com/actions/dependency-review-action
Prevent the introduction of dependencies with known vulnerabilities
| Name | Required | Description |
|---|---|---|
repo-token |
Optional | Token for the repository. Can be passed in using `{{ secrets.GITHUB_TOKEN }}`. Default: ${{ github.token }} |
fail-on-severity |
Optional | Don't block PRs below this severity. Possible values are `low`, `moderate`, `high`, `critical`. |
fail-on-scopes |
Optional | Dependency scopes to block PRs on. Comma-separated list. Possible values are 'unknown', 'runtime', and 'development' (e.g. "runtime, development") |
base-ref |
Optional | The base git ref to be used for this check. Has a default value when the workflow event is `pull_request` or `pull_request_target`. Must be provided otherwise. |
head-ref |
Optional | The head git ref to be used for this check. Has a default value when the workflow event is `pull_request` or `pull_request_target`. Must be provided otherwise. |
config-file |
Optional | A path to the configuration file for the action. |
allow-licenses |
Optional | Comma-separated list of allowed licenses (e.g. "MIT, GPL 3.0, BSD 2 Clause") |
deny-licenses |
Optional | Comma-separated list of forbidden licenses (e.g. "MIT, GPL 3.0, BSD 2 Clause") |
allow-dependencies-licenses |
Optional | Comma-separated list of dependencies in purl format (e.g. "pkg:npm/express, pkg:pypi/pycrypto"). These dependencies will be permitted to use any license, no matter what license policy is enforced otherwise. |
allow-ghsas |
Optional | Comma-separated list of allowed GitHub Advisory IDs (e.g. "GHSA-abcd-1234-5679, GHSA-efgh-1234-5679") |
external-repo-token |
Optional | A token for fetching an external configuration file if it lives in another repository. It is required if that repository is private |
license-check |
Optional | A boolean to determine if license checks should be performed |
vulnerability-check |
Optional | A boolean to determine if vulnerability checks should be performed |
comment-summary-in-pr |
Optional | Determines if the summary is posted as a comment in the PR itself. Setting this to `always` or `on-failure` requires you to give the workflow `pull-requests: write` permissions |
deny-packages |
Optional | A comma-separated list of package URLs to deny (e.g. "pkg:npm/express, pkg:pypi/pycrypto"). If version specified, only deny matching packages and version; else, deny all regardless of version. |
deny-groups |
Optional | A comma-separated list of package URLs for group(s)/namespace(s) to deny (e.g. "pkg:npm/express/, pkg:pypi/pycrypto/"). Please note that the group name must be followed by a `/`. |
retry-on-snapshot-warnings |
Optional | Whether to retry on snapshot warnings |
retry-on-snapshot-warnings-timeout |
Optional | Number of seconds to wait before stopping snapshot retries. |
warn-only |
Optional | When set to `true` this action will always complete with success, overriding the `fail-on-severity` parameter. |
show-openssf-scorecard |
Optional | Show a summary of the OpenSSF Scorecard scores. |
warn-on-openssf-scorecard-level |
Optional | Numeric threshold for the OpenSSF Scorecard score. If the score is below this threshold, the action will warn you. |
| Name | Description |
|---|---|
comment-content |
Prepared dependency report comment |
dependency-changes |
All dependency changes (JSON) |
vulnerable-changes |
Vulnerable dependency changes (JSON) |
invalid-license-changes |
Invalid license dependency changes (JSON) |
denied-changes |
Denied dependency changes (JSON) |
# IMPORTANT
#
# Avoid setting default values for configuration options in
# this file, they will overwrite external configurations.
#
# If you are trying to find out the default value for a config
# option please take a look at the README or src/schemas.ts.
#
# If you are adding an option, make sure the Zod definition
# contains a default value.
name: 'Dependency Review'
description: 'Prevent the introduction of dependencies with known vulnerabilities'
author: 'GitHub'
inputs:
repo-token:
description: Token for the repository. Can be passed in using `{{ secrets.GITHUB_TOKEN }}`.
required: false
default: ${{ github.token }}
fail-on-severity:
description: Don't block PRs below this severity. Possible values are `low`, `moderate`, `high`, `critical`.
required: false
fail-on-scopes:
description: Dependency scopes to block PRs on. Comma-separated list. Possible values are 'unknown', 'runtime', and 'development' (e.g. "runtime, development")
required: false
base-ref:
description: The base git ref to be used for this check. Has a default value when the workflow event is `pull_request` or `pull_request_target`. Must be provided otherwise.
required: false
head-ref:
description: The head git ref to be used for this check. Has a default value when the workflow event is `pull_request` or `pull_request_target`. Must be provided otherwise.
required: false
config-file:
description: A path to the configuration file for the action.
required: false
allow-licenses:
description: Comma-separated list of allowed licenses (e.g. "MIT, GPL 3.0, BSD 2 Clause")
required: false
deny-licenses:
description: Comma-separated list of forbidden licenses (e.g. "MIT, GPL 3.0, BSD 2 Clause")
required: false
allow-dependencies-licenses:
description: Comma-separated list of dependencies in purl format (e.g. "pkg:npm/express, pkg:pypi/pycrypto"). These dependencies will be permitted to use any license, no matter what license policy is enforced otherwise.
required: false
allow-ghsas:
description: Comma-separated list of allowed GitHub Advisory IDs (e.g. "GHSA-abcd-1234-5679, GHSA-efgh-1234-5679")
required: false
external-repo-token:
description: A token for fetching an external configuration file if it lives in another repository. It is required if that repository is private
required: false
license-check:
description: A boolean to determine if license checks should be performed
required: false
vulnerability-check:
description: A boolean to determine if vulnerability checks should be performed
required: false
comment-summary-in-pr:
description: "Determines if the summary is posted as a comment in the PR itself. Setting this to `always` or `on-failure` requires you to give the workflow `pull-requests: write` permissions"
required: false
deny-packages:
description: A comma-separated list of package URLs to deny (e.g. "pkg:npm/express, pkg:pypi/pycrypto"). If version specified, only deny matching packages and version; else, deny all regardless of version.
required: false
deny-groups:
description: A comma-separated list of package URLs for group(s)/namespace(s) to deny (e.g. "pkg:npm/express/, pkg:pypi/pycrypto/"). Please note that the group name must be followed by a `/`.
required: false
retry-on-snapshot-warnings:
description: Whether to retry on snapshot warnings
required: false
retry-on-snapshot-warnings-timeout:
description: Number of seconds to wait before stopping snapshot retries.
required: false
warn-only:
description: When set to `true` this action will always complete with success, overriding the `fail-on-severity` parameter.
required: false
show-openssf-scorecard:
description: Show a summary of the OpenSSF Scorecard scores.
required: false
warn-on-openssf-scorecard-level:
description: Numeric threshold for the OpenSSF Scorecard score. If the score is below this threshold, the action will warn you.
required: false
outputs:
comment-content:
description: Prepared dependency report comment
dependency-changes:
description: All dependency changes (JSON)
vulnerable-changes:
description: Vulnerable dependency changes (JSON)
invalid-license-changes:
description: Invalid license dependency changes (JSON)
denied-changes:
description: Denied dependency changes (JSON)
runs:
using: 'node20'
main: 'dist/index.js'
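A hedged usage sketch: a pull-request workflow that fails when new dependencies introduce vulnerabilities at or above a chosen severity. The `@v4` tag and the severity value are assumptions.

```yaml
# Sketch only: block PRs that add vulnerable dependencies (tag and threshold assumed).
name: Dependency review
on: pull_request
permissions:
  contents: read
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high
```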
Action ID: marketplace/codecov/codecov-ats-docker
Author: Unknown
Publisher: codecov
Repository: github.com/codecov/codecov-ats-docker
RUN ATS
| Name | Required | Description |
|---|---|---|
container |
Required | container to run in |
static_token |
Required | Codecov static token |
codecov_token |
Required | Codecov upload token |
output_path |
Optional | output path for test file in container, must be a fully qualified path Default: /tmp |
local_output_path |
Optional | where to copy the files out of the container Default: . |
install_cli |
Optional | Whether to install the Codecov CLI. Set to false if you have previously installed the CLI. Default: true |
main_branch |
Optional | The main branch to compare against Default: origin/main |
run_tests |
Optional | Whether to run the tests with the given command in the container. If set to true, this will run the test command and upload to Codecov Default: false |
test_command |
Optional | Command to run in container to execute tests. Required when run_tests is true. This should be `pytest` with the desired args at this point. |
codecov_url |
Optional | URL of Codecov Default: https://api.codecov.io |
codecov_cli_version |
Optional | Version of CLI to use Default: 0.3.8 |
codecov_cli_upload_args |
Optional | List of args to pass to cli on upload. This is kind of a hack pending a more robust solution |
codecov_cli_yml_path |
Optional | Path to codecov cli yml. Currently expected to include the flag as well, e.g. --codecov-yml-path=codecov_cli.yml |
| Name | Description |
|---|---|
test_skip_list |
the tests to skip |
runner_options |
the runner options |
test_list |
the tests to run |
name: 'ATS'
description: 'RUN ATS'
inputs:
container:
description: 'container to run in'
required: true
static_token:
description: 'Codecov static token'
required: true
codecov_token:
description: 'Codecov upload token'
required: true
output_path:
description: 'output path for test file in container, must be a fully qualified path'
default: '/tmp'
required: false
local_output_path:
description: 'where to copy the files out of the container'
default: '.'
required: false
install_cli:
required: false
default: 'true'
description: 'Whether to install the Codecov CLI. Set to false if you have previously installed the CLI.'
main_branch:
required: false
default: 'origin/main'
description: 'The main branch to compare against'
run_tests:
required: false
default: 'false'
description: 'Whether to run the tests with the given command in the container. If set to true, this will run the test command and upload to Codecov'
test_command:
required: false
description: 'Command to run in container to execute tests. Required when run_tests is true. This should be `pytest` with the desired args at this point.'
codecov_url:
required: false
description: 'URL of Codecov'
default: 'https://api.codecov.io'
codecov_cli_version:
required: false
description: 'Version of CLI to use'
default: '0.3.8'
codecov_cli_upload_args:
required: false
default: ''
description: 'List of args to pass to cli on upload. This is kind of a hack pending a more robust solution'
codecov_cli_yml_path:
required: false
default: ''
description: 'Path to codecov cli yml. Currently expected to include the flag as well, e.g. --codecov-yml-path=codecov_cli.yml'
outputs:
test_skip_list:
description: "the tests to skip"
value: ${{ steps.outputs.outputs.test_skip_list }}
runner_options:
description: "the runner options"
value: ${{ steps.outputs.outputs.runner_options }}
test_list:
description: "the tests to run"
value: ${{ steps.outputs.outputs.test_list }}
runs:
using: "composite"
steps:
- id: container-running
name: Ensure container is running
run: docker ps | grep ${{ inputs.container }}
shell: bash
- run: echo command="docker exec ${{ inputs.container }} sh -c" >> $GITHUB_OUTPUT
name: Set container exec command
id: command
shell: bash
- run: ${{ steps.command.outputs.command }} "pip install codecov-cli==${{ inputs.codecov_cli_version }}"
if: inputs.install_cli == 'true'
name: Install Codecov CLI
shell: bash
- run: ${{ steps.command.outputs.command }} "codecovcli -u ${{ inputs.codecov_url }} create-commit -t ${{ inputs.codecov_token }} --fail-on-error"
name: Create Codecov Commit
shell: bash
- run: ${{ steps.command.outputs.command }} "codecovcli -u ${{ inputs.codecov_url }} create-report -t ${{ inputs.codecov_token }} --fail-on-error"
name: Create Codecov Report
shell: bash
- run: ${{ steps.command.outputs.command }} "codecovcli -u ${{ inputs.codecov_url }} static-analysis --token=${{inputs.static_token}}"
name: Run Static Analysis
shell: bash
- run: ${{ steps.command.outputs.command }} "codecovcli -u ${{ inputs.codecov_url }} label-analysis --base-sha=$(git merge-base HEAD^ ${{ inputs.main_branch }}) --token=${{inputs.static_token}} --dry-run --dry-run-output-path=${{ inputs.output_path }}/codecov_analysis > /dev/null"
name: Run Label Analysis
shell: bash
- run: |
${{ steps.command.outputs.command }} "jq -r '.ats_tests_to_run []' ${{ inputs.output_path }}/codecov_analysis.json | sed s/\\\"//g > ${{ inputs.output_path }}/test_list"
docker cp ${{ inputs.container }}:${{ inputs.output_path }}/test_list ${{ inputs.local_output_path }}/test_list
name: Parse tests to run
shell: bash
- run: |
${{ steps.command.outputs.command }} "jq -r '.ats_tests_to_skip []' ${{ inputs.output_path }}/codecov_analysis.json | sed s/\\\"//g > ${{ inputs.output_path }}/test_skip_list"
docker cp ${{ inputs.container }}:${{ inputs.output_path }}/test_skip_list ${{ inputs.local_output_path }}/test_skip_list
name: Parse tests to skip
shell: bash
- run: |
${{ steps.command.outputs.command }} "jq -r '.runner_options | join(\" \")' ${{ inputs.output_path }}/codecov_analysis.json | sed s/\\\"//g | tr -d '\n'> ${{ inputs.output_path }}/runner_options"
docker cp ${{ inputs.container }}:${{ inputs.output_path }}/runner_options ${{ inputs.local_output_path }}/runner_options
name: Parse runner options
shell: bash
- run: |
echo test_list=$(cat ${{ inputs.local_output_path }}/test_list) >> $GITHUB_OUTPUT
echo test_skip_list=$(cat ${{ inputs.local_output_path }}/test_skip_list) >> $GITHUB_OUTPUT
echo runner_options=$(cat ${{ inputs.local_output_path }}/runner_options) >> $GITHUB_OUTPUT
shell: bash
name: Setup outputs
id: outputs
- run: ${{ steps.command.outputs.command }} "${{ inputs.test_command }} ${{ steps.outputs.outputs.runner_options }} `cat ${{ inputs.local_output_path }}/test_list | tr '\n' ' '`"
name: Run tests
if: inputs.run_tests == 'true'
shell: bash
- run: ${{ steps.command.outputs.command }} "codecovcli ${{ inputs.codecov_cli_yml_path }} -u ${{ inputs.codecov_url }} do-upload -t ${{ inputs.codecov_token }} ${{ inputs.codecov_cli_upload_args }} --fail-on-error"
name: Upload to Codecov
if: inputs.run_tests == 'true'
shell: bash
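A hedged usage sketch: the composite above assumes an already-running container, so a workflow would typically start one first and then hand its name to the action. The image, container name, ref and secret names below are placeholders.

```yaml
# Sketch only: run Codecov ATS against a running test container (all names are placeholders).
- name: Start test container
  run: docker run -d --name ci-tests my-test-image:latest sleep infinity
- uses: codecov/codecov-ats-docker@main   # ref is a placeholder, pin in practice
  with:
    container: ci-tests
    static_token: ${{ secrets.CODECOV_STATIC_TOKEN }}
    codecov_token: ${{ secrets.CODECOV_TOKEN }}
    run_tests: true
    test_command: pytest --cov
```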
Action ID: marketplace/JamesIves/fetch-api-data-action
Author: James Ives <iam@jamesiv.es>
Publisher: JamesIves
Repository: github.com/JamesIves/fetch-api-data-action
This action will handle authenticated API requests, allowing you to save the data from the request into your workspace.
| Name | Required | Description |
|---|---|---|
endpoint |
Required | The URL of the endpoint you would like to retrieve data from. |
configuration |
Optional | Any applicable configuration settings that should be set such as authentication tokens. You can reference secrets using the secrets syntax, or you can reference data returned from the `TOKEN_ENDPOINT` request using the triple bracket syntax. |
token-endpoint |
Optional | If the `ENDPOINT` API requires you to make a request to get an access token prior to fetching data you can perform this task by specifying a token endpoint. Any data returned from the token endpoint can be referenced in the `CONFIGURATION` variable using the triple bracket syntax. |
token-configuration |
Optional | Any applicable configuration settings that should be set such as authentication tokens. You can reference secrets using the secrets syntax. |
retry |
Optional | If you are working with an intermittent API you can toggle this option to true. Doing so will make the action try the request 3 times at random intervals before failing. |
save-location |
Optional | By default the save location of the JSON file is `fetch-api-data-action/data.json`, if you would like to override the directory you can do so by specifying a new one with this variable. |
save-name |
Optional | You can override the name of the exported `.json` file by specifying a new one here. You should _not_ include the file extension in your name. |
variable-name |
Optional | You can override the name of the variable the action exports. This will work so long as set-output is true. |
set-output |
Optional | Determines if the returned data should be saved as an environment variable or not. This field defaults to `true`, but depending on your API response length you may need to toggle this. Default: true |
debug |
Optional | If set to true the action will log the API responses it receives in the terminal. |
format |
Optional | Allows you to modify the format of the saved file, for example you can use txt here to save the file as a txt file. This field defaults to json. |
encoding |
Optional | Allows you to specify the encoding of the saved file, can be of type BufferEncoding: "ascii" | "utf8" | "utf-8" | "utf16le" | "ucs2" | "ucs-2" | "base64" | "latin1" | "binary" | "hex". |
| Name | Description |
|---|---|
fetchApiData |
The requested data from the API stored as a string. |
name: 'Fetch API Data'
description: 'This action will handle authenticated API requests, allowing you to save the data from the request into your workspace.'
author: 'James Ives <iam@jamesiv.es>'
runs:
using: 'node20'
main: 'lib/main.js'
branding:
icon: 'truck'
color: 'purple'
inputs:
endpoint:
description: 'The URL of the endpoint you would like to retrieve data from.'
required: true
configuration:
description: 'Any applicable configuration settings that should be set such as authentication tokens. You can reference secrets using the secrets syntax, or you can reference data returned from the `TOKEN_ENDPOINT` request using the triple bracket syntax.'
required: false
token-endpoint:
description: 'If the `ENDPOINT` API requires you to make a request to get an access token prior to fetching data you can perform this task by specifying a token endpoint. Any data returned from the token endpoint can be referenced in the `CONFIGURATION` variable using the triple bracket syntax.'
required: false
token-configuration:
description: 'Any applicable configuration settings that should be set such as authentication tokens. You can reference secrets using the secrets syntax.'
required: false
retry:
description: 'If you are working with an intermittent API you can toggle this option to true. Doing so will make the action try the request 3 times at random intervals before failing.'
required: false
save-location:
description: 'By default the save location of the JSON file is `fetch-api-data-action/data.json`, if you would like to override the directory you can do so by specifying a new one with this variable.'
required: false
save-name:
description: 'You can override the name of the exported `.json` file by specifying a new one here. You should _not_ include the file extension in your name.'
required: false
variable-name:
description: 'You can override the name of the variable the action exports. This will work so long as set-output is true.'
required: false
set-output:
description: 'Determines if the returned data should be saved as an environment variable or not. This field defaults to `true`, but depending on your API response length you may need to toggle this.'
required: false
default: 'true'
debug:
description: 'If set to true the action will log the API responses it receives in the terminal.'
required: false
format:
description: 'Allows you to modify the format of the saved file, for example you can use txt here to save the file as a txt file. This field defaults to json.'
required: false
encoding:
description: 'Allows you to specify the encoding of the saved file, can be of type BufferEncoding: "ascii" | "utf8" | "utf-8" | "utf16le" | "ucs2" | "ucs-2" | "base64" | "latin1" | "binary" | "hex".'
required: false
outputs:
fetchApiData:
description: 'The requested data from the API stored as a string.'
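A hedged usage sketch: fetch JSON from an authenticated endpoint and save it into the workspace. The `@v2` tag, endpoint URL, secret name and paths are illustrative assumptions.

```yaml
# Sketch only: pull API data into the workspace (tag, URL and paths assumed).
- uses: JamesIves/fetch-api-data-action@v2
  with:
    endpoint: https://example.com/api/items
    configuration: '{ "method": "GET", "headers": { "Authorization": "Bearer ${{ secrets.API_TOKEN }}" } }'
    save-location: data       # file ends up at data/items.json
    save-name: items
```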
Action ID: marketplace/benchmark-action/github-action-benchmark
Author: github-action-benchmark developers <https://github.com/benchmark-action>
Publisher: benchmark-action
Repository: github.com/benchmark-action/github-action-benchmark
Continuous Benchmark using GitHub Pages as a dashboard for keeping track of performance
| Name | Required | Description |
|---|---|---|
name |
Required | Name of the benchmark. This value must be identical among all benchmarks Default: Benchmark |
tool |
Required | Tool used to get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter" |
output-file-path |
Required | A path to file which contains the benchmark output |
gh-pages-branch |
Required | Branch for gh-pages Default: gh-pages |
gh-repository |
Optional | Url to an optional different repository to store benchmark results |
benchmark-data-dir-path |
Required | Path to directory which contains benchmark files on GitHub pages branch Default: dev/bench |
github-token |
Optional | GitHub API token to pull/push the GitHub Pages branch and deploy GitHub Pages. For a public repository, this must be a personal access token for now. Please read README.md for more details |
ref |
Optional | optional Ref to use when finding commit |
auto-push |
Optional | Push GitHub Pages branch to remote automatically. This option requires github-token input |
skip-fetch-gh-pages |
Optional | Skip pulling GitHub Pages branch before generating an auto commit |
comment-always |
Optional | Leave a comment with benchmark result comparison. To enable this feature, github-token input must be given as well |
summary-always |
Optional | Leave a job summary with benchmark result comparison |
save-data-file |
Optional | Save the benchmark data to external file Default: True |
comment-on-alert |
Optional | Leave an alert comment when current benchmark result is worse than previous. Threshold is specified with alert-threshold input. To enable this feature, github-token input must be given as well |
alert-threshold |
Optional | Threshold which determines if an alert should happen or not. Percentage value such as "150%". For example, 150% means that an alert happens when current benchmark result is 1.5x worse than previous Default: 200% |
fail-on-alert |
Optional | Workflow fails when alert comment happens |
fail-threshold |
Optional | Threshold which determines if the current workflow fails. Format is the same as alert-threshold input. If this value is not specified, the same value as alert-threshold is used |
alert-comment-cc-users |
Optional | Comma separated GitHub user names which start with @ (e.g. "@foo,@bar"). They will be mentioned in commit comment for alert. |
external-data-json-path |
Optional | JSON data file for storing benchmark results. When this input is set, github-action-benchmark no longer uses Git branch to store data. Instead, it reads and appends benchmark data from/to the file. User must store the file anywhere |
max-items-in-chart |
Optional | Max data points in a benchmark chart to avoid making the chart too busy. Value must be unsigned integer. No limit by default |
name: 'Continuous Benchmark'
author: 'github-action-benchmark developers <https://github.com/benchmark-action>'
description: 'Continuous Benchmark using GitHub Pages as a dashboard for keeping track of performance'
branding:
icon: 'fast-forward'
color: 'blue'
inputs:
name:
description: 'Name of the benchmark. This value must be identical among all benchmarks'
required: true
default: 'Benchmark'
tool:
description: 'Tool used to get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter"'
required: true
output-file-path:
description: 'A path to file which contains the benchmark output'
required: true
gh-pages-branch:
description: 'Branch for gh-pages'
required: true
default: 'gh-pages'
gh-repository:
description: 'Url to an optional different repository to store benchmark results'
required: false
benchmark-data-dir-path:
description: 'Path to directory which contains benchmark files on GitHub pages branch'
required: true
default: 'dev/bench'
github-token:
description: 'GitHub API token to pull/push the GitHub Pages branch and deploy GitHub Pages. For a public repository, this must be a personal access token for now. Please read README.md for more details'
required: false
ref:
description: 'optional Ref to use when finding commit'
required: false
auto-push:
description: 'Push GitHub Pages branch to remote automatically. This option requires github-token input'
required: false
default: false
skip-fetch-gh-pages:
description: 'Skip pulling GitHub Pages branch before generating an auto commit'
required: false
default: false
comment-always:
description: 'Leave a comment with benchmark result comparison. To enable this feature, github-token input must be given as well'
required: false
default: false
summary-always:
description: 'Leave a job summary with benchmark result comparison'
required: false
default: false
save-data-file:
description: 'Save the benchmark data to external file'
required: false
default: true
comment-on-alert:
description: 'Leave an alert comment when current benchmark result is worse than previous. Threshold is specified with alert-threshold input. To enable this feature, github-token input must be given as well'
required: false
default: false
alert-threshold:
description: 'Threshold which determines if an alert should happen or not. Percentage value such as "150%". For example, 150% means that an alert happens when current benchmark result is 1.5x worse than previous'
required: false
default: '200%'
fail-on-alert:
description: 'Workflow fails when alert comment happens'
required: false
# Note: Set to false by default since this action does not push to remote by default. When workflow
# fails and auto-push is not set, there is no chance to push the result to remote.
default: false
fail-threshold:
description: 'Threshold which determines if the current workflow fails. Format is the same as alert-threshold input. If this value is not specified, the same value as alert-threshold is used'
required: false
alert-comment-cc-users:
description: 'Comma separated GitHub user names which start with @ (e.g. "@foo,@bar"). They will be mentioned in commit comment for alert.'
required: false
external-data-json-path:
description: 'JSON data file for storing benchmark results. When this input is set, github-action-benchmark no longer uses Git branch to store data. Instead, it reads and appends benchmark data from/to the file. User must store the file anywhere'
required: false
max-items-in-chart:
description: 'Max data points in a benchmark chart to avoid making the chart too busy. Value must be unsigned integer. No limit by default'
required: false
runs:
using: 'node20'
main: 'dist/src/index.js'
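A hedged usage sketch: run Go benchmarks, store the results on the `gh-pages` branch and alert on regressions. The `@v1` tag, tool choice and thresholds are assumptions.

```yaml
# Sketch only: continuous benchmarking for a Go project (tag and values assumed).
- run: go test -bench . | tee benchmark.txt
- uses: benchmark-action/github-action-benchmark@v1
  with:
    name: Go Benchmark
    tool: go
    output-file-path: benchmark.txt
    github-token: ${{ secrets.GITHUB_TOKEN }}
    auto-push: true
    comment-on-alert: true
    alert-threshold: '150%'
```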
Action ID: marketplace/vsoch/urlchecker-action
Author: Ayoub Malek, Vanessa Sochat
Publisher: vsoch
Repository: github.com/vsoch/urlchecker-action
Automatically check for broken links in a project's files. This includes Python, Markdown, reStructuredText files and more.
| Name | Required | Description |
|---|---|---|
git_path |
Optional | A project to clone. If not provided, assumes the project is already cloned in the present working directory. |
subfolder |
Optional | A subfolder to parse instead of the root directory. |
force_pass |
Optional | Force pass of checking, regardless of the result. |
branch |
Optional | If a project (git_path) is defined, use this branch. Defaults to master Default: master |
cleanup |
Optional | Clean up (delete) the repository after checking it (appropriate when a clone was done) |
include_files |
Optional | A comma-separated list of paths or patterns to include. |
file_types |
Optional | A comma-separated list of file types to cover in the URL checks Default: .md,.py,.rst,.html |
print_all |
Optional | Choose whether to include files with no URLs in the printed output. |
verbose |
Optional | Choose whether to print a more verbose end summary with files and broken URLs. |
serial |
Optional | Run in serial (good for debugging) |
retry_count |
Optional | If a request fails, retry this number of times. Defaults to 1 Default: 1 |
save |
Optional | A csv file path to save the results to. |
timeout |
Optional | The timeout (seconds) to provide to requests to wait for a response. Default: 5 |
exclude_urls |
Optional | A comma-separated list of links to exclude during URL checks. |
exclude_patterns |
Optional | A comma-separated list of patterns to exclude during URL checks. |
exclude_files |
Optional | A comma-separated list of paths or patterns to exclude during URL checks. |
workers |
Optional | Number of workers to run in parallel (defaults to 9) |
name: "urlchecker-action"
author: "Ayoub Malek, Vanessa Sochat"
description: "Automatically check for broken links in a project files. This includes python, markdwon, restructured text files and more."
inputs:
git_path:
description: "A project to clone. If not provided, assumes already cloned in the present working directory."
required: false
subfolder:
description: "A subfolder to parse instead of the root directory."
required: false
force_pass:
description: "Force pass of checking, regardless of the result."
required: false
branch:
description: "If a project (git_path) is defined, use this branch. Defaults to master"
required: false
default: master
cleanup:
description: "Cleanup (delete) repository to check after doing so (appropriate for when clone is done)"
required: false
default: false
include_files:
description: "A comma seperated list of paths or patterns to include."
required: false
default: ""
file_types:
description: "A comma-separated list of file types to cover in the URL checks"
required: false
default: ".md,.py,.rst,.html"
print_all:
description: "Choose whether to include file with no URLs in the prints."
required: false
default: false
verbose:
description: "Choose whether to print a more verbose end summary with files and broken URLs."
required: false
default: false
serial:
description: "Run in serial (good for debugging)"
required: false
default: false
retry_count:
description: "If a request fails, retry this number of times. Defaults to 1"
required: false
default: 1
save:
description: "A csv file path to save the results to."
required: false
default: ""
timeout:
description: "The timeout (seconds) to provide to requests to wait for a response."
required: false
default: 5
exclude_urls:
description: "A comma seperated links to exclude during URL checks."
required: false
default: ""
exclude_patterns:
description: "A comma seperated patterns to exclude during URL checks."
required: false
default: ""
exclude_files:
description: "A comma seperated list of paths or patterns to exclude during URL checks."
required: false
default: ""
workers:
description: "Number of workers to run in parallel (defaults to 9)"
required: false
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "link"
color: "blue"
Action ID: marketplace/vsoch/tributors
Author: Vanessa Sochat
Publisher: vsoch
Repository: github.com/vsoch/tributors
Update your contributor metadata files with repository members
| Name | Required | Description |
|---|---|---|
parsers |
Optional | space-separated list of parsers to use. Defaults to unset (auto-detect) Default: unset |
update_lookup |
Optional | before running update, update the .tributors metadata file from these resources |
skip_users |
Optional | one or more GitHub logins (space separated) to skip. Default: unset |
zenodo_file |
Optional | .zenodo.json to update. If it does not exist, the doi variable must be defined Default: .zenodo.json |
zenodo_doi |
Optional | Zenodo DOI needed for init. Leave unset to skip init. |
log_level |
Optional | Log level to use (default INFO) Default: INFO |
threshold |
Optional | the minimum number of contributions required to add a user Default: 1 |
force |
Optional | if any files exist, overwrite. |
codemeta_file |
Optional | codemeta filename (skipped if not defined) Default: codemeta.json |
mailmap_file |
Optional | A mailmap file for update-lookup (defaults to .mailmap) Default: .mailmap |
allcontrib_file |
Optional | All contributors filename (defaults to .all-contributorsrc) Default: .all-contributorsrc |
allcontrib_type |
Optional | All Contributors contribution type, which defaults to code if not set. Default: code |
allcontrib_skip_generate |
Optional | skip running all-contributors generate |
run_twice |
Optional | run twice to avoid opening two pull requests (defaults to true) Default: True |
name: "tributors-action"
author: "Vanessa Sochat"
description: "Update your contributor metadata files with repository members"
inputs:
parsers:
description: space-separated list of parsers to use. Defaults to unset (auto-detect)
default: unset
update_lookup:
description: before running update, update the .tributors metadata file from these resources
required: false
skip_users:
description: one or more GitHub logins (space separated) to skip.
required: false
default: unset
zenodo_file:
description: .zenodo.json to update. If it does not exist, the doi variable must be defined
default: .zenodo.json
zenodo_doi:
description: "Zenodo DOI needed for init. Leave unset to skip init."
required: false
log_level:
description: "Log level to use (default INFO)"
default: "INFO"
threshold:
description: "the minimum number of contributions required to add a user"
default: 1
force:
description: "if any files exist, overwrite."
default: false
codemeta_file:
description: codemeta filename (skipped if not defined)
default: codemeta.json
mailmap_file:
description: A mailmap file for update-lookup (defaults to .mailmap)
default: .mailmap
allcontrib_file:
description: All contributors filename (defaults to .all-contributorsrc)
default: .all-contributorsrc
allcontrib_type:
description: "All Contributors contribution type, which defaults to code if not set."
default: code
allcontrib_skip_generate:
description: "skip running all-contributors generate"
default: false
run_twice:
description: "run twice to avoid opening two pull requests (defaults to true)"
default: true
runs:
using: docker
image: Dockerfile
branding:
icon: link
color: blue
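A hedged usage sketch: refresh contributor metadata after a checkout. The ref, the parser names and the use of a `GITHUB_TOKEN` environment variable are assumptions inferred from the inputs listed above, not confirmed details.

```yaml
# Sketch only: update contributor metadata files (ref, parsers and token usage assumed).
- uses: actions/checkout@v4
- uses: vsoch/tributors@main   # ref is a placeholder
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumed to be how the container authenticates
  with:
    parsers: allcontrib zenodo
    threshold: 1
```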
Action ID: marketplace/mxschmitt/action-upterm
Author: Owen Ou
Publisher: mxschmitt
Repository: github.com/mxschmitt/action-upterm
This GitHub Action enables direct interaction with the host system running your GitHub Actions via SSH
| Name | Required | Description |
|---|---|---|
limit-access-to-actor |
Optional | If only the public SSH keys of the user triggering the workflow should be authorized Default: false |
limit-access-to-users |
Optional | If only the public SSH keys of the listed GitHub users should be authorized |
upterm-server |
Required | upterm server address (required), supported protocols are ssh, ws, or wss. Default: ssh://uptermd.upterm.dev:22 |
ssh-known-hosts |
Optional | Content for ~/.ssh/known_hosts file on the server |
wait-timeout-minutes |
Optional | Integer number of minutes to wait for user to connect before shutting down server. Once a user connects, the server will stay up. |
name: "Debug with ssh"
description: "This GitHub Action enables direct interaction with the host system running your GitHub Actions via SSH"
author: "Owen Ou"
branding:
icon: "terminal"
color: "purple"
runs:
using: "node20"
main: "lib/index.js"
inputs:
limit-access-to-actor:
description: "If only the public SSH keys of the user triggering the workflow should be authorized"
required: false
default: "false"
limit-access-to-users:
description: "If only the public SSH keys of the listed GitHub users should be authorized"
required: false
default: ""
upterm-server:
description: "upterm server address (required), supported protocols are ssh, ws, or wss."
required: true
default: "ssh://uptermd.upterm.dev:22"
ssh-known-hosts:
description: "Content for ~/.ssh/known_hosts file on the server"
required: false
default: ""
wait-timeout-minutes:
description: "Integer number of minutes to wait for user to connect before shutting down server. Once a user connects, the server will stay up."
required: false
default: ""
Action ID: marketplace/dflook/terraform-remote-state
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-remote-state
Retrieves the root-level outputs from a Terraform remote state.
| Name | Required | Description |
|---|---|---|
backend_type |
Required | The name of the Terraform plugin used for backend state |
workspace |
Optional | Terraform workspace to get the outputs for Default: default |
backend_config |
Optional | List of Terraform backend config values, one per line. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
json_output_path |
This is the path to all the root module outputs in a JSON file. The path is relative to the Actions workspace. For example, with the Terraform config: ```hcl output "service_hostname" { value = "example.com" } ``` The file pointed to by this output will contain: ```json { "service_hostname": "example.com" } ``` Terraform list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object. |
name: terraform-remote-state
description: Retrieves the root-level outputs from a Terraform remote state.
author: Daniel Flook
inputs:
backend_type:
description: The name of the Terraform plugin used for backend state
required: true
workspace:
description: Terraform workspace to get the outputs for
required: false
default: "default"
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
outputs:
json_output_path:
description: |
This is the path to all the root module outputs in a JSON file.
The path is relative to the Actions workspace.
For example, with the Terraform config:
```hcl
output "service_hostname" {
value = "example.com"
}
```
The file pointed to by this output will contain:
```json
{
"service_hostname": "example.com"
}
```
Terraform list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/remote-state.sh
branding:
icon: globe
color: purple
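A hedged usage sketch: read the outputs of an S3-backed remote state and consume one of them in a later step via the `json_output_path` output. The `@v1` tag, bucket, key and the `service_hostname` output name (taken from the example above) are assumptions.

```yaml
# Sketch only: read remote state outputs (tag and backend values assumed).
- id: remote-state
  uses: dflook/terraform-remote-state@v1
  with:
    backend_type: s3
    backend_config: |
      bucket=my-terraform-state
      key=prod/terraform.tfstate
      region=eu-west-1
- run: jq -r '.service_hostname' "${{ steps.remote-state.outputs.json_output_path }}"
```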
Action ID: marketplace/amirisback/frogo-ui
Author: Muhammad Faisal Amir - Frogobox
Publisher: amirisback
Repository: github.com/amirisback/frogo-ui
Library Easy UI Kit Based on Design Guideline, Full and Clear Documentation, :books:
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-UI'
description: 'Library Easy UI Kit Based on Design Guideline, Full and Clear Documentation, :books:'
author: 'Muhammad Faisal Amir - Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/appleboy/checkout
Author: Unknown
Publisher: appleboy
Repository: github.com/appleboy/checkout
Checkout a Git repository at a particular version
| Name | Required | Description |
|---|---|---|
| repository | Optional | Repository name with owner. For example, actions/checkout Default: ${{ github.repository }} |
| ref | Optional | The branch, tag or SHA to checkout. When checking out the repository that triggered a workflow, this defaults to the reference or SHA for that event. Otherwise, uses the default branch. |
| token | Optional | Personal access token (PAT) used to fetch the repository. The PAT is configured with the local git config, which enables your scripts to run authenticated git commands. The post-job step removes the PAT. We recommend using a service account with the least permissions necessary. Also when generating a new PAT, select the least scopes necessary. [Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets) Default: ${{ github.token }} |
| ssh-key | Optional | SSH key used to fetch the repository. The SSH key is configured with the local git config, which enables your scripts to run authenticated git commands. The post-job step removes the SSH key. We recommend using a service account with the least permissions necessary. [Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets) |
| ssh-known-hosts | Optional | Known hosts in addition to the user and global host key database. The public SSH keys for a host may be obtained using the utility `ssh-keyscan`. For example, `ssh-keyscan github.com`. The public key for github.com is always implicitly added. |
| ssh-strict | Optional | Whether to perform strict host key checking. When true, adds the options `StrictHostKeyChecking=yes` and `CheckHostIP=no` to the SSH command line. Use the input `ssh-known-hosts` to configure additional hosts. Default: True |
| persist-credentials | Optional | Whether to configure the token or SSH key with the local git config Default: True |
| path | Optional | Relative path under $GITHUB_WORKSPACE to place the repository |
| clean | Optional | Whether to execute `git clean -ffdx && git reset --hard HEAD` before fetching Default: True |
| sparse-checkout | Optional | Do a sparse checkout on given patterns. Each pattern should be separated with new lines |
| sparse-checkout-cone-mode | Optional | Specifies whether to use cone-mode when doing a sparse checkout. Default: True |
| fetch-depth | Optional | Number of commits to fetch. 0 indicates all history for all branches and tags. Default: 1 |
| lfs | Optional | Whether to download Git-LFS files |
| submodules | Optional | Whether to checkout submodules: `true` to checkout submodules or `recursive` to recursively checkout submodules. When the `ssh-key` input is not provided, SSH URLs beginning with `git@github.com:` are converted to HTTPS. |
| set-safe-directory | Optional | Add repository path as safe.directory for Git global config by running `git config --global --add safe.directory <path>` Default: True |
| github-server-url | Optional | The base URL for the GitHub instance that you are trying to clone from, will use environment defaults to fetch from the same instance that the workflow is running from unless specified. Example URLs are https://github.com or https://my-ghes-server.example.com |
name: 'Checkout'
description: 'Checkout a Git repository at a particular version'
inputs:
repository:
description: 'Repository name with owner. For example, actions/checkout'
default: ${{ github.repository }}
ref:
description: >
The branch, tag or SHA to checkout. When checking out the repository that
triggered a workflow, this defaults to the reference or SHA for that
event. Otherwise, uses the default branch.
token:
description: >
Personal access token (PAT) used to fetch the repository. The PAT is configured
with the local git config, which enables your scripts to run authenticated git
commands. The post-job step removes the PAT.
We recommend using a service account with the least permissions necessary.
Also when generating a new PAT, select the least scopes necessary.
[Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
default: ${{ github.token }}
ssh-key:
description: >
SSH key used to fetch the repository. The SSH key is configured with the local
git config, which enables your scripts to run authenticated git commands.
The post-job step removes the SSH key.
We recommend using a service account with the least permissions necessary.
[Learn more about creating and using
encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
ssh-known-hosts:
description: >
Known hosts in addition to the user and global host key database. The public
SSH keys for a host may be obtained using the utility `ssh-keyscan`. For example,
`ssh-keyscan github.com`. The public key for github.com is always implicitly added.
ssh-strict:
description: >
Whether to perform strict host key checking. When true, adds the options `StrictHostKeyChecking=yes`
and `CheckHostIP=no` to the SSH command line. Use the input `ssh-known-hosts` to
configure additional hosts.
default: true
persist-credentials:
description: 'Whether to configure the token or SSH key with the local git config'
default: true
path:
description: 'Relative path under $GITHUB_WORKSPACE to place the repository'
clean:
description: 'Whether to execute `git clean -ffdx && git reset --hard HEAD` before fetching'
default: true
sparse-checkout:
description: >
Do a sparse checkout on given patterns.
Each pattern should be separated with new lines
default: null
sparse-checkout-cone-mode:
description: >
Specifies whether to use cone-mode when doing a sparse checkout.
default: true
fetch-depth:
description: 'Number of commits to fetch. 0 indicates all history for all branches and tags.'
default: 1
lfs:
description: 'Whether to download Git-LFS files'
default: false
submodules:
description: >
Whether to checkout submodules: `true` to checkout submodules or `recursive` to
recursively checkout submodules.
When the `ssh-key` input is not provided, SSH URLs beginning with `git@github.com:` are
converted to HTTPS.
default: false
set-safe-directory:
description: Add repository path as safe.directory for Git global config by running `git config --global --add safe.directory <path>`
default: true
github-server-url:
description: The base URL for the GitHub instance that you are trying to clone from, will use environment defaults to fetch from the same instance that the workflow is running from unless specified. Example URLs are https://github.com or https://my-ghes-server.example.com
required: false
runs:
using: node16
main: dist/index.js
post: dist/index.js
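A minimal usage sketch for this action, assuming it is published with a versioned tag (the `@v4` below is illustrative; pin to a tag or SHA that actually exists in appleboy/checkout). Only inputs documented above are used:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out another repository with full history and Git-LFS files
      - uses: appleboy/checkout@v4     # tag illustrative
        with:
          repository: my-org/my-repo   # hypothetical repository
          ref: main
          fetch-depth: 0               # 0 = all history for all branches and tags
          lfs: true
```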
Action ID: marketplace/azure/bicep-build-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/bicep-build-action
Build an ARM template from Bicep main file
| Name | Required | Description |
|---|---|---|
| bicepFilePath | Required | Bicep main file path Default: ./main.bicep |
| outputFilePath | Optional | ARM template output path Default: ./azuredeploy.json |
name: bicep-build-output
description: Build an ARM template from Bicep main file
branding:
icon: check
color: blue
inputs:
bicepFilePath:
description: Bicep main file path
required: true
default: ./main.bicep
outputFilePath:
description: ARM template output path
required: false
default: ./azuredeploy.json
runs:
using: composite
steps:
- run: az bicep build --file "${{ inputs.bicepFilePath }}" --outfile "${{ inputs.outputFilePath }}"
shell: bash
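A sketch of how this composite action might be wired into a workflow. It shells out to `az bicep build`, so the runner is assumed to have the Azure CLI available (as GitHub-hosted Ubuntu runners do); the release tag is illustrative:

```yaml
jobs:
  build-arm-template:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Compile the Bicep file into an ARM template JSON file
      - uses: azure/bicep-build-action@v1.0.0   # tag illustrative
        with:
          bicepFilePath: ./main.bicep
          outputFilePath: ./azuredeploy.json
      # Keep the generated template as a build artifact
      - uses: actions/upload-artifact@v4
        with:
          name: arm-template
          path: ./azuredeploy.json
```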
Action ID: marketplace/peter-evans/python-action
Author: Peter Evans
Publisher: peter-evans
Repository: github.com/peter-evans/python-action
A template to bootstrap the creation of a multi-platform Python GitHub action
| Name | Required | Description |
|---|---|---|
| message | Required | A message to the world Default: Hello World! |
| sender | Required | The sender of the message Default: Peter |
| Name | Description |
|---|---|
| reply | A reply from the world |
name: 'Python Action'
author: Peter Evans
description: 'A template to bootstrap the creation of a multi-platform Python GitHub action'
inputs:
message:
description: 'A message to the world'
required: true
default: 'Hello World!'
sender:
description: 'The sender of the message'
required: true
default: 'Peter'
outputs:
reply:
description: 'A reply from the world'
runs:
using: 'node12'
main: 'dist/index.js'
branding:
icon: 'activity'
color: 'yellow'
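Since the template exposes a `reply` output, a usage sketch would read it back through the step id (tag illustrative):

```yaml
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - id: hello
        uses: peter-evans/python-action@v1   # tag illustrative
        with:
          message: Hello from CI!
          sender: Octocat
      # Echo the output produced by the action
      - run: echo "Reply was ${{ steps.hello.outputs.reply }}"
```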
Action ID: marketplace/vsoch/codestats
Author: Unknown
Publisher: vsoch
Repository: github.com/vsoch/codestats
Collect basic metrics about repository health and file content.
| Name | Required | Description |
|---|---|---|
| repository | Optional | Repository to get stats for. |
| org | Optional | Organization to get stats for |
| config | Optional | Config file to derive stats from |
| outfile | Optional | Output file to write to (defaults to org.json or repo.json) |
| metric | Optional | Instead of a default or config to derive metrics, use this single or comma separated list |
| skip | Optional | Pattern to skip (for org extraction only) |
| pattern | Optional | Pattern to include (for org extraction only) |
name: 'Codestats Action'
description: 'Collect basic metrics about repository health and file content.'
inputs:
repository:
description: 'Repository to get stats for.'
required: false
org:
description: 'Organization to get stats for'
required: false
config:
description: 'Config file to derive stats from'
required: false
outfile:
description: "Output file to write to (defaults to org.json or repo.json)"
required: false
metric:
description: Instead of a default or config to derive metrics, use this single or comma separated list
required: false
skip:
description: 'Pattern to skip (for org extraction only)'
required: false
pattern:
description: 'Pattern to include (for org extraction only)'
required: false
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'activity'
color: 'white'
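A hypothetical workflow sketch that collects stats for a single repository and writes them to a custom file; the ref is illustrative and all inputs are optional:

```yaml
jobs:
  codestats:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vsoch/codestats@main    # ref illustrative
        with:
          repository: vsoch/codestats # repository to collect stats for
          outfile: repo-stats.json    # overrides the repo.json default
```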
Action ID: marketplace/wangyoucao577/Publish-Docker-Github-Action
Author: Lars Gohr
Publisher: wangyoucao577
Repository: github.com/wangyoucao577/Publish-Docker-Github-Action
Uses the git branch as the docker tag and pushes the container
| Name | Required | Description |
|---|---|---|
| name | Required | The name of the image you would like to push |
| username | Required | The login username for the registry |
| password | Required | The login password for the registry |
| registry | Optional | Use registry for pushing to a custom registry |
| snapshot | Optional | Use snapshot to push an additional image |
| default_branch | Optional | Set the default branch of your repository (default: master) |
| dockerfile | Optional | Use dockerfile when you would like to explicitly build a Dockerfile |
| workdir | Optional | Use workdir when you would like to change the directory for building |
| context | Optional | Use context when you would like to change the Docker build context. |
| buildargs | Optional | Use buildargs when you want to pass a list of environment variables as build-args |
| buildoptions | Optional | Use buildoptions when you want to configure options for building |
| cache | Optional | Use cache when you have big images, that you would only like to build partially |
| tags | Optional | Use tags when you want to bring your own tags (separated by comma) |
| tag_names | Optional | Use tag_names when you want to push tags/release by their git name |
| tag_semver | Optional | Push semver docker tags. e.g. image:1.2.3, image:1.2, image:1 |
| no_push | Optional | Set no_push to true if you want to prevent the action from pushing to a registry (default: false) |
| platforms | Optional | Use platforms for building multi-arch images |
| Name | Description |
|---|---|
| tag | Is the tag, which was pushed |
| snapshot-tag | Is the tag that is generated by the snapshot-option and pushed |
| digest | Is the digest of the image, which was pushed |
name: 'Publish Docker'
author: 'Lars Gohr'
branding:
icon: 'anchor'
color: 'blue'
description: 'Uses the git branch as the docker tag and pushes the container'
inputs:
name:
description: 'The name of the image you would like to push'
required: true
username:
description: 'The login username for the registry'
required: true
password:
description: 'The login password for the registry'
required: true
registry:
description: 'Use registry for pushing to a custom registry'
required: false
snapshot:
description: 'Use snapshot to push an additional image'
required: false
default_branch:
description: 'Set the default branch of your repository (default: master)'
required: false
dockerfile:
description: 'Use dockerfile when you would like to explicitly build a Dockerfile'
required: false
workdir:
description: 'Use workdir when you would like to change the directory for building'
required: false
context:
description: 'Use context when you would like to change the Docker build context.'
required: false
buildargs:
description: 'Use buildargs when you want to pass a list of environment variables as build-args'
required: false
buildoptions:
description: 'Use buildoptions when you want to configure options for building'
required: false
cache:
description: 'Use cache when you have big images, that you would only like to build partially'
required: false
tags:
description: 'Use tags when you want to bring your own tags (separated by comma)'
required: false
tag_names:
description: 'Use tag_names when you want to push tags/release by their git name'
required: false
tag_semver:
description: 'Push semver docker tags. e.g. image:1.2.3, image:1.2, image:1'
required: false
no_push:
description: 'Set no_push to true if you want to prevent the action from pushing to a registry (default: false)'
required: false
platforms:
description: 'Use platforms for building multi-arch images'
required: false
outputs:
tag:
description: 'Is the tag, which was pushed'
value: ${{ steps.docker-publish.outputs.tag }}
snapshot-tag:
description: 'Is the tag that is generated by the snapshot-option and pushed'
value: ${{ steps.docker-publish.outputs.snapshot-tag }}
digest:
description: 'Is the digest of the image, which was pushed'
value: ${{ steps.docker-publish.outputs.digest }}
runs:
using: 'composite'
steps:
- id: docker-publish
run: $GITHUB_ACTION_PATH/entrypoint.sh
shell: bash
env:
INPUT_NAME: ${{ inputs.name }}
INPUT_USERNAME: ${{ inputs.username }}
INPUT_PASSWORD: ${{ inputs.password }}
INPUT_REGISTRY: ${{ inputs.registry }}
INPUT_SNAPSHOT: ${{ inputs.snapshot }}
INPUT_DEFAULT_BRANCH: ${{ inputs.default_branch }}
INPUT_DOCKERFILE: ${{ inputs.dockerfile }}
INPUT_WORKDIR: ${{ inputs.workdir }}
INPUT_CONTEXT: ${{ inputs.context }}
INPUT_BUILDARGS: ${{ inputs.buildargs }}
INPUT_BUILDOPTIONS: ${{ inputs.buildoptions }}
INPUT_CACHE: ${{ inputs.cache }}
INPUT_TAGS: ${{ inputs.tags }}
INPUT_TAG_NAMES: ${{ inputs.tag_names }}
INPUT_TAG_SEMVER: ${{ inputs.tag_semver }}
INPUT_NO_PUSH: ${{ inputs.no_push }}
INPUT_PLATFORMS: ${{ inputs.platforms }}
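A usage sketch under the assumption that registry credentials are stored as repository secrets (the secret names and image name are hypothetical); the ref is illustrative and the outputs are read back via the step id:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: publish
        uses: wangyoucao577/Publish-Docker-Github-Action@master   # ref illustrative
        with:
          name: my-org/my-image                      # hypothetical image name
          username: ${{ secrets.DOCKER_USERNAME }}   # hypothetical secret names
          password: ${{ secrets.DOCKER_PASSWORD }}
          tag_semver: true   # also push image:1.2.3, image:1.2, image:1 on semver tags
      - run: echo "Pushed ${{ steps.publish.outputs.tag }} (${{ steps.publish.outputs.digest }})"
```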
Action ID: marketplace/amirisback/easy-kotlin-lib-jar
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/easy-kotlin-lib-jar
Easy Sample Kotlin Library
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Easy Kotlin Lib Jar'
description: 'Easy Sample Kotlin Library'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/android-glide
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-glide
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/DeLaGuardo/setup-clojure
Author: DeLaGuardo
Publisher: DeLaGuardo
Repository: github.com/DeLaGuardo/setup-clojure
Setup your runner with Clojure build tools
| Name | Required | Description |
|---|---|---|
| lein | Optional | The Leiningen version to make available on the path. |
| boot | Optional | The boot-clj version to make available on the path. |
| tools-deps | Optional | [DEPRECATED] The tools deps version to make available on the path. |
| cli | Optional | Clojure CLI version to make available on the path. |
| cmd-exe-workaround | Optional | [DEPRECATED] On Windows platform, it will replace official Clojure CLI with the `deps.clj` of its specific version, `latest` can be used. Useful for running `clojure` command from `cmd.exe`. |
| bb | Optional | Babashka version to install, `latest` can be used. |
| clj-kondo | Optional | Clj-kondo version to install, `latest` can be used. |
| cljfmt | Optional | cljfmt version to install, `latest` can be used. |
| cljstyle | Optional | cljstyle version to install, `latest` can be used. |
| zprint | Optional | zprint version to install, `latest` can be used. |
| github-token | Optional | To fix rate limit errors, provide `secrets.GITHUB_TOKEN` value to this field. More info: https://docs.github.com/en/actions/security-guides/automatic-token-authentication Default: ${{ github.token }} |
| invalidate-cache | Optional | Set to `true` to fix problems related to wrongly populated tool cache Default: false |
name: 'Setup Clojure'
description: 'Setup your runner with Clojure build tools'
author: 'DeLaGuardo'
branding:
icon: 'gift'
color: 'blue'
inputs:
lein:
description: 'The Leiningen version to make available on the path.'
boot:
description: 'The boot-clj version to make available on the path.'
tools-deps:
description: '[DEPRECATED] The tools deps version to make available on the path.'
deprecationMessage: 'Use the `cli` input instead'
cli:
description: 'Clojure CLI version to make available on the path.'
cmd-exe-workaround:
description: >+
[DEPRECATED]
On Windows platform, it will replace official Clojure CLI
with the `deps.clj` of its specific version, `latest` can be used.
Useful for running `clojure` command from `cmd.exe`.
deprecationMessage: 'No longer needed. Please remove and use unified `cli` input instead'
bb:
description: 'Babashka version to install, `latest` can be used.'
clj-kondo:
description: 'Clj-kondo version to install, `latest` can be used.'
cljfmt:
description: 'cljfmt version to install, `latest` can be used.'
cljstyle:
description: 'cljstyle version to install, `latest` can be used.'
zprint:
description: 'zprint version to install, `latest` can be used.'
github-token:
description: >+
To fix rate limit errors, provide `secrets.GITHUB_TOKEN` value to this field.
More info: https://docs.github.com/en/actions/security-guides/automatic-token-authentication
default: ${{ github.token }}
required: false
invalidate-cache:
description: >+
Set to `true` to fix problems related to wrongly populated tool cache
default: 'false'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
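A minimal sketch installing the Clojure CLI plus a couple of the optional tools (the tag is illustrative; any of the version inputs accept `latest`):

```yaml
jobs:
  clojure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeLaGuardo/setup-clojure@13.0   # tag illustrative
        with:
          cli: latest
          bb: latest
          clj-kondo: latest
          github-token: ${{ secrets.GITHUB_TOKEN }}   # avoids rate-limit errors
      - run: clojure -M -e '(println "Clojure is on the path")'
```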
Action ID: marketplace/julia-actions/build-julia
Author: Sascha Mann
Publisher: julia-actions
Repository: github.com/julia-actions/build-julia
Build Julia from source for a given commit.
| Name | Required | Description |
|---|---|---|
| ref | Optional | Value passed to git checkout. Default: master |
| source-repo | Optional | Value passed to git clone. Default: https://github.com/JuliaLang/julia.git |
| target-dir | Optional | Directory that Julia will be installed in Default: $HOME/julia |
name: Build Julia
description: Build Julia from source for a given commit.
author: Sascha Mann
branding:
icon: cpu
color: purple
inputs:
ref:
description: Value passed to git checkout.
default: master
required: false
source-repo:
description: Value passed to git clone.
default: https://github.com/JuliaLang/julia.git
required: false
target-dir:
description: Directory that Julia will be installed in
default: $HOME/julia
required: false
runs:
using: composite
steps:
- name: Clone Julia
run: git clone ${{ inputs.source-repo }} "${{ inputs.target-dir }}"
shell: bash
- name: Checkout ref
run: |
cd "${{ inputs.target-dir }}"
git checkout ${{ inputs.ref }}
shell: bash
- name: Build Julia
run: |
cd "${{ inputs.target-dir }}"
# GHA runners don't all run on the same hardware
# Not specifying a CPU target can lead to issues when caching the build output
echo "JULIA_CPU_TARGET=x86_64" >> Make.user
make -j 2
shell: bash
- name: Print Version
run: |
cd "${{ inputs.target-dir }}"
./julia --version
shell: bash
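Because the composite steps above clone, check out, and `make` the source, a caller only needs to pick a ref and (optionally) a target directory. A sketch, with an illustrative tag:

```yaml
jobs:
  build-julia:
    runs-on: ubuntu-latest
    steps:
      - uses: julia-actions/build-julia@v1   # tag illustrative
        with:
          ref: master               # commit, branch or tag to build
          target-dir: $HOME/julia   # where the source is cloned and built
      # The freshly built binary lives inside the target directory
      - run: $HOME/julia/julia --version
```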
Action ID: marketplace/DeLaGuardo/super-linter
Author: GitHub
Publisher: DeLaGuardo
Repository: github.com/DeLaGuardo/super-linter
It is a simple combination of various linters, written in bash, to help validate your source code.
name: 'Super-Linter'
author: 'GitHub'
description: 'It is a simple combination of various linters, written in bash, to help validate your source code.'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'check-square'
color: 'white'
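This fork documents no inputs here; like the upstream github/super-linter it is driven by environment variables, so the variable names below follow the upstream conventions and are an assumption rather than something listed in this entry (ref illustrative):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeLaGuardo/super-linter@master   # ref illustrative
        env:
          # Variable names follow upstream github/super-linter; not documented in this entry
          VALIDATE_ALL_CODEBASE: false
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```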
Action ID: marketplace/DeLaGuardo/clj-kondo-lint-action
Author: Your name or organization here
Publisher: DeLaGuardo
Repository: github.com/DeLaGuardo/clj-kondo-lint-action
Provide a description here
| Name | Required | Description |
|---|---|---|
| myInput | Optional | input description here Default: default value if applicable |
name: 'Your name here'
description: 'Provide a description here'
author: 'Your name or organization here'
inputs:
myInput: # change this
description: 'input description here'
default: 'default value if applicable'
runs:
using: 'node12'
main: 'lib/main.js'
Action ID: marketplace/amirisback/frogo-log
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/frogo-log
SDK for your Log problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Frogo-Log'
description: 'SDK for your Log problem to make easier developing android apps'
author: 'Muhammad Faisal Amir'
branding:
icon: 'archive'
color: 'green'
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/pantaucovid-android
Author: Fiqry Choirudin
Publisher: amirisback
Repository: github.com/amirisback/pantaucovid-android
Pantau Covid19 Deprecated Version
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Pantau Covid'
description: 'Pantau Covid19 Deprecated Version'
author: 'Fiqry Choirudin'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/jidicula/humans.txt
Author: Unknown
Publisher: jidicula
Repository: github.com/jidicula/humans.txt
List out the humans who help feed and tend the robots of GitHub Actions
| Name | Required | Description |
|---|---|---|
| format | Optional | How to output the people of actions - txt, json, html or ascii Default: ascii |
| output | Optional | Where to output the file - stdout otherwise |
name: 'GitHub Actions humans.txt'
description: 'List out the humans who help feed and tend the robots of GitHub Actions'
inputs:
format:
description: 'How to output the people of actions - txt, json, html or ascii'
default: 'ascii'
output:
description: 'Where to output the file - stdout otherwise'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- "${{ inputs.format }}"
- "${{ inputs.output }}"
Action ID: marketplace/actions/setup-node
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-node
Setup a Node.js environment by adding problem matchers and optionally downloading and adding it to the PATH.
| Name | Required | Description |
|---|---|---|
| node-version | Optional | Version Spec of the version to use. Examples: 12.x, 10.15.1, >=10.15.0. |
| node-version-file | Optional | File containing the version Spec of the version to use. Examples: package.json, .nvmrc, .node-version, .tool-versions. |
| architecture | Optional | Target architecture for Node to use. Examples: x86, x64. Will use system architecture by default. |
| check-latest | Optional | Set this option if you want the action to check for the latest available version that satisfies the version spec. |
| registry-url | Optional | Optional registry to set up for auth. Will set the registry in a project level .npmrc and .yarnrc file, and set up auth to read in from env.NODE_AUTH_TOKEN. |
| scope | Optional | Optional scope for authenticating against scoped registries. Will fall back to the repository owner when using the GitHub Packages registry (https://npm.pkg.github.com/). |
| token | Optional | Used to pull node distributions from node-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. Default: ${{ github.server_url == 'https://github.com' && github.token \|\| '' }} |
| cache | Optional | Used to specify a package manager for caching in the default directory. Supported values: npm, yarn, pnpm. |
| package-manager-cache | Optional | Set to false to disable automatic caching. By default, caching is enabled when either devEngines.packageManager or the top-level packageManager field in package.json specifies npm as the package manager. Default: True |
| cache-dependency-path | Optional | Used to specify the path to a dependency file: package-lock.json, yarn.lock, etc. Supports wildcards or a list of file names for caching multiple dependencies. |
| mirror | Optional | Used to specify an alternative mirror to download Node.js binaries from |
| mirror-token | Optional | The token used as Authorization header when fetching from the mirror |
| Name | Description |
|---|---|
| cache-hit | A boolean value to indicate if a cache was hit. |
| node-version | The installed node version. |
name: 'Setup Node.js environment'
description: 'Setup a Node.js environment by adding problem matchers and optionally downloading and adding it to the PATH.'
author: 'GitHub'
inputs:
node-version:
description: 'Version Spec of the version to use. Examples: 12.x, 10.15.1, >=10.15.0.'
node-version-file:
description: 'File containing the version Spec of the version to use. Examples: package.json, .nvmrc, .node-version, .tool-versions.'
architecture:
description: 'Target architecture for Node to use. Examples: x86, x64. Will use system architecture by default.'
check-latest:
description: 'Set this option if you want the action to check for the latest available version that satisfies the version spec.'
default: false
registry-url:
description: 'Optional registry to set up for auth. Will set the registry in a project level .npmrc and .yarnrc file, and set up auth to read in from env.NODE_AUTH_TOKEN.'
scope:
description: 'Optional scope for authenticating against scoped registries. Will fall back to the repository owner when using the GitHub Packages registry (https://npm.pkg.github.com/).'
token:
description: Used to pull node distributions from node-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
cache:
description: 'Used to specify a package manager for caching in the default directory. Supported values: npm, yarn, pnpm.'
package-manager-cache:
description: 'Set to false to disable automatic caching. By default, caching is enabled when either devEngines.packageManager or the top-level packageManager field in package.json specifies npm as the package manager.'
default: true
cache-dependency-path:
description: 'Used to specify the path to a dependency file: package-lock.json, yarn.lock, etc. Supports wildcards or a list of file names for caching multiple dependencies.'
mirror:
    description: 'Used to specify an alternative mirror to download Node.js binaries from'
mirror-token:
description: 'The token used as Authorization header when fetching from the mirror'
# TODO: add input to control forcing to pull from cloud or dist.
# escape valve for someone having issues or needing the absolute latest which isn't cached yet
outputs:
cache-hit:
description: 'A boolean value to indicate if a cache was hit.'
node-version:
description: 'The installed node version.'
runs:
using: 'node24'
main: 'dist/setup/index.js'
post: 'dist/cache-save/index.js'
post-if: success()
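A typical usage sketch: install a Node.js version, enable npm dependency caching, and read back the outputs (the `@v4` tag is illustrative; pin whichever release you actually use):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: node
        uses: actions/setup-node@v4   # tag illustrative
        with:
          node-version: 20
          cache: npm
          cache-dependency-path: package-lock.json
      - run: npm ci
      - run: npm test
      # Outputs exposed by the action
      - run: echo "Node ${{ steps.node.outputs.node-version }}, cache hit: ${{ steps.node.outputs.cache-hit }}"
```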
Action ID: marketplace/dflook/terraform-apply
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-apply
Apply a Terraform plan
| Name | Required | Description |
|---|---|---|
| path | Optional | Path to the Terraform root module to apply Default: . |
| workspace | Optional | Terraform workspace to run the apply in Default: default |
| label | Optional | A friendly name for the environment the Terraform configuration is for. This will be used in the PR comment for easy identification. It must be the same as the `label` used in the corresponding [`dflook/terraform-plan`](https://github.com/dflook/terraform-github-actions/tree/main/terraform-plan) action. |
| variables | Optional | Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| backend_config | Optional | List of Terraform backend config values, one per line. |
| backend_config_file | Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| replace | Optional | List of resources to replace, one per line. |
| target | Optional | List of resources to apply, one per line. The apply operation will be limited to these resources and their dependencies. |
| destroy | Optional | Set to `true` to destroy all resources. This generates and applies a plan in [destroy mode](https://developer.hashicorp.com/terraform/cli/commands/plan#planning-modes). Default: false |
| refresh | Optional | Set to `false` to skip synchronisation of the Terraform state with actual resources. This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans. Default: true |
| plan_path | Optional | Path to a plan file to apply. This would have been generated by a previous [`dflook/terraform-plan`](https://github.com/dflook/terraform-github-actions/tree/main/terraform-plan) action. The default behaviour when this is not set is to generate a plan from the current configuration and compare it to the plan attached to the PR comment. If it is logically the same, the plan will be applied. When this is set to a plan file, the plan will not be generated again. If it is the exact same plan as the one attached to the PR comment, it will be applied. This will be faster than generating a new plan. There are downsides to applying a stored plan: - The plan may contain sensitive information so must be stored securely, possibly outside of GitHub. - It does not account for any changes that have occurred since it was generated, and may no longer be correct. - Plans must be generated and applied in strict order. Multiple open PRs will cause conflicts if they are applied out of order. - Plans are not portable between platforms. - Terraform and provider versions must match between the plan generation and apply. When `auto_approve` is set to `true`, the plan will be applied without checking if it is the same as the one attached to the PR comment. |
| auto_approve | Optional | When set to `true`, plans are always applied. The default is `false`, which requires plans to have been added to a pull request comment. Default: false |
| parallelism | Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
| json_plan_path | This is the path to the generated plan in [JSON Output Format](https://www.terraform.io/docs/internals/json-format.html). The path is relative to the Actions workspace. Terraform plans often contain sensitive information, so this output should be treated with care. This won't be set if the backend type is `remote` - Terraform does not support saving remote plans. |
| text_plan_path | This is the path to the generated plan in a human-readable format. The path is relative to the Actions workspace. This won't be set if `auto_approve` is true while using a `remote` backend. |
| failure-reason | When the job outcome is `failure`, this output may be set. The value may be one of: - `apply-failed` - The Terraform apply operation failed. - `plan-changed` - The approved plan is no longer accurate, so the apply will not be attempted. - `state-locked` - The Terraform state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
| lock-info | When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: ```json { "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" } ``` |
| run_id | If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id. |
| json_output_path | This is the path to all the root module outputs in a JSON file. The path is relative to the Actions workspace. For example, with the Terraform config: ```hcl output "service_hostname" { value = "example.com" } ``` The file pointed to by this output will contain: ```json { "service_hostname": "example.com" } ``` Terraform list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object. |
name: terraform-apply
description: Apply a Terraform plan
author: Daniel Flook
inputs:
path:
description: Path to the Terraform root module to apply
required: false
default: "."
workspace:
description: Terraform workspace to run the apply in
required: false
default: "default"
label:
description: |
A friendly name for the environment the Terraform configuration is for.
This will be used in the PR comment for easy identification.
It must be the same as the `label` used in the corresponding [`dflook/terraform-plan`](https://github.com/dflook/terraform-github-actions/tree/main/terraform-plan) action.
required: false
default: ""
variables:
description: |
Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
replace:
description: List of resources to replace, one per line.
required: false
default: ""
target:
description: |
List of resources to apply, one per line.
The apply operation will be limited to these resources and their dependencies.
required: false
default: ""
destroy:
description: |
Set to `true` to destroy all resources.
This generates and applies a plan in [destroy mode](https://developer.hashicorp.com/terraform/cli/commands/plan#planning-modes).
required: false
default: "false"
refresh:
description: |
Set to `false` to skip synchronisation of the Terraform state with actual resources.
This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans.
required: false
default: "true"
plan_path:
description: |
Path to a plan file to apply. This would have been generated by a previous [`dflook/terraform-plan`](https://github.com/dflook/terraform-github-actions/tree/main/terraform-plan) action.
The default behaviour when this is not set is to generate a plan from the current configuration and compare it to the plan attached to the PR comment.
If it is logically the same, the plan will be applied.
When this is set to a plan file, the plan will not be generated again. If it is the exact same plan as the one attached to the PR comment, it will be applied.
This will be faster than generating a new plan.
There are downsides to applying a stored plan:
- The plan may contain sensitive information so must be stored securely, possibly outside of GitHub.
- It does not account for any changes that have occurred since it was generated, and may no longer be correct.
- Plans must be generated and applied in strict order. Multiple open PRs will cause conflicts if they are applied out of order.
- Plans are not portable between platforms.
- Terraform and provider versions must match between the plan generation and apply.
When `auto_approve` is set to `true`, the plan will be applied without checking if it is the same as the one attached to the PR comment.
required: false
default: ""
auto_approve:
description: |
When set to `true`, plans are always applied.
The default is `false`, which requires plans to have been added to a pull request comment.
required: false
default: "false"
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
json_plan_path:
description: |
This is the path to the generated plan in [JSON Output Format](https://www.terraform.io/docs/internals/json-format.html).
The path is relative to the Actions workspace.
Terraform plans often contain sensitive information, so this output should be treated with care.
This won't be set if the backend type is `remote` - Terraform does not support saving remote plans.
text_plan_path:
description: |
This is the path to the generated plan in a human-readable format.
The path is relative to the Actions workspace.
This won't be set if `auto_approve` is true while using a `remote` backend.
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `apply-failed` - The Terraform apply operation failed.
- `plan-changed` - The approved plan is no longer accurate, so the apply will not be attempted.
- `state-locked` - The Terraform state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
run_id:
description: If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id.
json_output_path:
description: |
This is the path to all the root module outputs in a JSON file.
The path is relative to the Actions workspace.
For example, with the Terraform config:
```hcl
output "service_hostname" {
value = "example.com"
}
```
The file pointed to by this output will contain:
```json
{
"service_hostname": "example.com"
}
```
Terraform list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/apply.sh
branding:
icon: globe
color: purple
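A usage sketch for applying on merge to the default branch; the module path is hypothetical, the tag is illustrative, and the `GITHUB_TOKEN` environment variable is assumed to be how the action authenticates for PR-comment lookups (per the dflook/terraform-github-actions documentation):

```yaml
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/terraform-apply@v1   # tag illustrative
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumed auth mechanism
        with:
          path: terraform/production      # hypothetical root module path
          variables: |
            image_tag = "${{ github.sha }}"
          var_file: |
            production.tfvars
```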
Action ID: marketplace/axel-op/containerized-dart-github-action
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/containerized-dart-github-action
Greet someone and record the time
| Name | Required | Description |
|---|---|---|
| who-to-greet | Required | Who to greet Default: World |
| Name | Description |
|---|---|
| time | The time we greeted you |
name: "Hello World"
description: "Greet someone and record the time"
inputs:
who-to-greet: # id of input
description: "Who to greet"
required: true
default: "World"
outputs:
time: # id of output
description: "The time we we greeted you"
runs:
using: "docker"
image: "Dockerfile"
# Those override the ENTRYPOINT defined in the Dockerfile
pre-entrypoint: "/action/scripts/setup.sh" # Delete this line to not run the setup script
entrypoint: "/action/scripts/main.sh"
post-entrypoint: "/action/scripts/cleanup.sh" # Delete this line to not run the cleanup script
Action ID: marketplace/codecov/codecov-action
Author: Thomas Hu <@thomasrockhu-codecov> | Codecov
Publisher: codecov
Repository: github.com/codecov/codecov-action
GitHub Action that uploads coverage reports for your repository to codecov.io
| Name | Required | Description |
|---|---|---|
| base_sha | Optional | The base SHA to select. This is only used in the "pr-base-picking" run command |
| binary | Optional | The file location of a pre-downloaded version of the CLI. If specified, integrity checking will be bypassed. |
| codecov_yml_path | Optional | The location of the codecov.yml file. This is currently ONLY used for automated test selection (https://docs.codecov.com/docs/getting-started-with-ats). Note that for all other cases, the Codecov yaml will need to be located as described here: https://docs.codecov.com/docs/codecov-yaml#can-i-name-the-file-codecovyml |
| commit_parent | Optional | SHA (with 40 chars) of what should be the parent of this commit. |
| directory | Optional | Folder to search for coverage files. Defaults to the current working directory |
| disable_file_fixes | Optional | Disable file fixes to ignore common lines from coverage (e.g. blank lines or empty brackets). Read more here https://docs.codecov.com/docs/fixing-reports Default: false |
| disable_search | Optional | Disable search for coverage files. This is helpful when specifying what files you want to upload with the files option. Default: false |
| disable_safe_directory | Optional | Disable setting safe directory. Set to true to disable. Default: false |
| disable_telem | Optional | Disable sending telemetry data to Codecov. Set to true to disable. Default: false |
| dry_run | Optional | Don't upload files to Codecov Default: false |
| env_vars | Optional | Environment variables to tag the upload with (e.g. PYTHON \| OS,PYTHON) |
| exclude | Optional | Comma-separated list of folders to exclude from search. |
| fail_ci_if_error | Optional | On error, exit with non-zero code Default: false |
| files | Optional | Comma-separated list of explicit files to upload. These will be added to the coverage files found for upload. If you wish to only upload the specified files, please consider using disable_search to disable uploading other files. |
| flags | Optional | Comma-separated list of flags to upload to group coverage metrics. |
| force | Optional | Only used for empty-upload run command |
| git_service | Optional | Override the git_service (e.g. github_enterprise) Default: github |
| gcov_args | Optional | Extra arguments to pass to gcov |
| gcov_executable | Optional | gcov executable to run. Defaults to 'gcov' Default: gcov |
| gcov_ignore | Optional | Paths to ignore during gcov gathering |
| gcov_include | Optional | Paths to include during gcov gathering |
| handle_no_reports_found | Optional | If no coverage reports are found, do not raise an exception. Default: false |
| job_code | Optional | |
| name | Optional | Custom defined name of the upload. Visible in the Codecov UI |
| network_filter | Optional | Specify a filter on the files listed in the network section of the Codecov report. This will only add files whose path begin with the specified filter. Useful for upload-specific path fixing. |
| network_prefix | Optional | Specify a prefix on files listed in the network section of the Codecov report. Useful to help resolve path fixing. |
| os | Optional | Override the assumed OS. Options available at cli.codecov.io |
| override_branch | Optional | Specify the branch to be displayed with this commit on Codecov |
| override_build | Optional | Specify the build number manually |
| override_build_url | Optional | The URL of the build where this is running |
| override_commit | Optional | Commit SHA (with 40 chars) |
| override_pr | Optional | Specify the pull request number manually. Used to override pre-existing CI environment variables. |
| plugins | Optional | Comma-separated list of plugins to run. Specify `noop` to turn off all plugins |
| recurse_submodules | Optional | Whether to enumerate files inside of submodules for path-fixing purposes. Off by default. Default: false |
| report_code | Optional | The code of the report if using local upload. If unsure, leave default. Read more here https://docs.codecov.com/docs/the-codecov-cli#how-to-use-local-upload |
| report_type | Optional | The type of file to upload, coverage by default. Possible values are "test_results", "coverage". |
| root_dir | Optional | Root folder from which to consider paths on the network section. Defaults to current working directory. |
| run_command | Optional | Choose which CLI command to run. Options are "upload-coverage", "empty-upload", "pr-base-picking", "send-notifications". "upload-coverage" is run by default. Default: upload-coverage |
| skip_validation | Optional | Skip integrity checking of the CLI. This is NOT recommended. Default: false |
| slug | Optional | [Required when using the org token] Set to the owner/repo slug used instead of the private repo token. Only applicable to some Enterprise users. |
| swift_project | Optional | Specify the swift project name. Useful for optimization. |
| token | Optional | Repository Codecov token. Used to authorize report uploads |
| url | Optional | Set to the Codecov instance URL. Used by Dedicated Enterprise Cloud customers. |
| use_legacy_upload_endpoint | Optional | Use the legacy upload endpoint. Default: false |
| use_oidc | Optional | Use OIDC instead of token. This will ignore any token supplied Default: false |
| use_pypi | Optional | Use the pypi version of the CLI instead of from cli.codecov.io Default: false |
| verbose | Optional | Enable verbose logging Default: false |
| version | Optional | Which version of the Codecov CLI to use (defaults to 'latest') Default: latest |
| working-directory | Optional | Directory in which to execute codecov.sh |
---
# yamllint disable rule:line-length
name: 'Codecov'
description: 'GitHub Action that uploads coverage reports for your repository to codecov.io'
author: 'Thomas Hu <@thomasrockhu-codecov> | Codecov'
inputs:
base_sha:
description: 'The base SHA to select. This is only used in the "pr-base-picking" run command'
required: false
binary:
description: 'The file location of a pre-downloaded version of the CLI. If specified, integrity checking will be bypassed.'
required: false
codecov_yml_path:
    description: 'The location of the codecov.yml file. This is currently ONLY used for automated test selection (https://docs.codecov.com/docs/getting-started-with-ats). Note that for all other cases, the Codecov yaml will need to be located as described here: https://docs.codecov.com/docs/codecov-yaml#can-i-name-the-file-codecovyml'
required: false
commit_parent:
description: 'SHA (with 40 chars) of what should be the parent of this commit.'
required: false
directory:
description: 'Folder to search for coverage files. Default to the current working directory'
required: false
disable_file_fixes:
description: 'Disable file fixes to ignore common lines from coverage (e.g. blank lines or empty brackets). Read more here https://docs.codecov.com/docs/fixing-reports'
required: false
default: 'false'
disable_search:
description: 'Disable search for coverage files. This is helpful when specifying what files you want to upload with the files option.'
required: false
default: 'false'
disable_safe_directory:
description: 'Disable setting safe directory. Set to true to disable.'
required: false
default: 'false'
disable_telem:
description: 'Disable sending telemetry data to Codecov. Set to true to disable.'
required: false
default: 'false'
dry_run:
description: "Don't upload files to Codecov"
required: false
default: 'false'
env_vars:
description: 'Environment variables to tag the upload with (e.g. PYTHON | OS,PYTHON)'
required: false
exclude:
description: 'Comma-separated list of folders to exclude from search.'
required: false
fail_ci_if_error:
description: 'On error, exit with non-zero code'
required: false
default: 'false'
files:
description: 'Comma-separated list of explicit files to upload. These will be added to the coverage files found for upload. If you wish to only upload the specified files, please consider using disable_search to disable uploading other files.'
required: false
flags:
description: 'Comma-separated list of flags to upload to group coverage metrics.'
required: false
force:
description: 'Only used for empty-upload run command'
required: false
git_service:
description: 'Override the git_service (e.g. github_enterprise)'
required: false
default: 'github'
gcov_args:
description: 'Extra arguments to pass to gcov'
required: false
gcov_executable:
description: "gcov executable to run. Defaults to 'gcov'"
required: false
default: 'gcov'
gcov_ignore:
description: 'Paths to ignore during gcov gathering'
required: false
gcov_include:
description: "Paths to include during gcov gathering"
required: false
handle_no_reports_found:
description: 'If no coverage reports are found, do not raise an exception.'
required: false
default: 'false'
job_code:
description: ''
required: false
name:
description: 'Custom defined name of the upload. Visible in the Codecov UI'
required: false
network_filter:
description: 'Specify a filter on the files listed in the network section of the Codecov report. This will only add files whose path begin with the specified filter. Useful for upload-specific path fixing.'
required: false
network_prefix:
description: 'Specify a prefix on files listed in the network section of the Codecov report. Useful to help resolve path fixing.'
required: false
os:
description: 'Override the assumed OS. Options available at cli.codecov.io'
required: false
override_branch:
description: 'Specify the branch to be displayed with this commit on Codecov'
required: false
override_build:
description: 'Specify the build number manually'
required: false
override_build_url:
description: 'The URL of the build where this is running'
required: false
override_commit:
description: 'Commit SHA (with 40 chars)'
required: false
override_pr:
description: 'Specify the pull request number manually. Used to override pre-existing CI environment variables.'
required: false
plugins:
description: 'Comma-separated list of plugins to run. Specify `noop` to turn off all plugins'
required: false
recurse_submodules:
description: 'Whether to enumerate files inside of submodules for path-fixing purposes. Off by default.'
default: 'false'
report_code:
description: 'The code of the report if using local upload. If unsure, leave default. Read more here https://docs.codecov.com/docs/the-codecov-cli#how-to-use-local-upload'
required: false
report_type:
description: 'The type of file to upload, coverage by default. Possible values are "test_results", "coverage".'
required: false
root_dir:
description: 'Root folder from which to consider paths on the network section. Defaults to current working directory.'
required: false
run_command:
description: 'Choose which CLI command to run. Options are "upload-coverage", "empty-upload", "pr-base-picking", "send-notifications". "upload-coverage" is run by default.'
required: false
default: 'upload-coverage'
skip_validation:
description: 'Skip integrity checking of the CLI. This is NOT recommended.'
required: false
default: 'false'
slug:
description: '[Required when using the org token] Set to the owner/repo slug used instead of the private repo token. Only applicable to some Enterprise users.'
required: false
swift_project:
description: 'Specify the swift project name. Useful for optimization.'
required: false
token:
description: 'Repository Codecov token. Used to authorize report uploads'
required: false
url:
    description: 'Set to the Codecov instance URL. Used by Dedicated Enterprise Cloud customers.'
required: false
use_legacy_upload_endpoint:
description: 'Use the legacy upload endpoint.'
required: false
default: 'false'
use_oidc:
description: 'Use OIDC instead of token. This will ignore any token supplied'
required: false
default: 'false'
use_pypi:
description: 'Use the pypi version of the CLI instead of from cli.codecov.io'
required: false
default: 'false'
verbose:
description: 'Enable verbose logging'
required: false
default: 'false'
version:
description: "Which version of the Codecov CLI to use (defaults to 'latest')"
required: false
default: 'latest'
working-directory:
description: 'Directory in which to execute codecov.sh'
required: false
branding:
color: 'red'
icon: 'umbrella'
runs:
using: "composite"
steps:
- name: Check system dependencies
shell: sh
run: |
missing_deps=""
# Check for always-required commands
for cmd in bash git curl; do
if ! command -v "$cmd" >/dev/null 2>&1; then
missing_deps="$missing_deps $cmd"
fi
done
# Check for gpg only if validation is not being skipped
if [ "${{ inputs.skip_validation }}" != "true" ]; then
if ! command -v gpg >/dev/null 2>&1; then
missing_deps="$missing_deps gpg"
fi
fi
# Report missing required dependencies
if [ -n "$missing_deps" ]; then
echo "Error: The following required dependencies are missing:$missing_deps"
echo "Please install these dependencies before using this action."
exit 1
fi
echo "All required system dependencies are available."
- name: Action version
shell: bash
run: |
CC_ACTION_VERSION=$(cat ${GITHUB_ACTION_PATH}/src/version)
echo -e "\033[0;32m==>\033[0m Running Action version $CC_ACTION_VERSION"
- name: Set safe directory
if: ${{ inputs.disable_safe_directory != 'true' }}
shell: bash
run: |
git config --global --add safe.directory "${{ github.workspace }}"
git config --global --add safe.directory "$GITHUB_WORKSPACE"
- name: Set fork
shell: bash
run: |
CC_FORK="false"
if [ -n "$GITHUB_EVENT_PULL_REQUEST_HEAD_REPO_FULL_NAME" ] && [ "$GITHUB_EVENT_PULL_REQUEST_HEAD_REPO_FULL_NAME" != "$GITHUB_REPOSITORY" ];
then
echo -e "\033[0;32m==>\033[0m Fork detected"
CC_FORK="true"
fi
echo "CC_FORK=$CC_FORK" >> "$GITHUB_ENV"
env:
GITHUB_EVENT_PULL_REQUEST_HEAD_LABEL: ${{ github.event.pull_request.head.label }}
GITHUB_EVENT_PULL_REQUEST_HEAD_REPO_FULL_NAME: ${{ github.event.pull_request.head.repo.full_name }}
GITHUB_REPOSITORY: ${{ github.repository }}
- name: Get OIDC token
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
id: oidc
with:
script: |
if (process.env.CC_USE_OIDC === 'true' && process.env.CC_FORK != 'true') {
const id_token = await core.getIDToken(process.env.CC_OIDC_AUDIENCE)
return id_token
}
env:
CC_OIDC_AUDIENCE: ${{ inputs.url || 'https://codecov.io' }}
CC_USE_OIDC: ${{ inputs.use_oidc }}
- name: Get and set token
shell: bash
run: |
if [ "${{ inputs.use_oidc }}" == 'true' ] && [ "$CC_FORK" != 'true' ];
then
echo "CC_TOKEN=$CC_OIDC_TOKEN" >> "$GITHUB_ENV"
elif [ -n "${{ env.CODECOV_TOKEN }}" ];
then
echo -e "\033[0;32m==>\033[0m Token set from env"
echo "CC_TOKEN=${{ env.CODECOV_TOKEN }}" >> "$GITHUB_ENV"
else
if [ -n "${{ inputs.token }}" ];
then
echo -e "\033[0;32m==>\033[0m Token set from input"
CC_TOKEN=$(echo "${{ inputs.token }}" | tr -d '\n')
echo "CC_TOKEN=$CC_TOKEN" >> "$GITHUB_ENV"
fi
fi
env:
CC_OIDC_TOKEN: ${{ steps.oidc.outputs.result }}
CC_OIDC_AUDIENCE: ${{ inputs.url || 'https://codecov.io' }}
- name: Override branch for forks
shell: bash
run: |
if [ -z "$CC_BRANCH" ] && [ -z "$CC_TOKEN" ] && [ "$CC_FORK" == 'true' ]
then
echo -e "\033[0;32m==>\033[0m Fork detected, setting branch to $GITHUB_EVENT_PULL_REQUEST_HEAD_LABEL"
TOKENLESS="$GITHUB_EVENT_PULL_REQUEST_HEAD_LABEL"
CC_BRANCH="$GITHUB_EVENT_PULL_REQUEST_HEAD_LABEL"
echo "TOKENLESS=$TOKENLESS" >> "$GITHUB_ENV"
fi
echo "CC_BRANCH=$CC_BRANCH" >> "$GITHUB_ENV"
env:
CC_BRANCH: ${{ inputs.override_branch }}
GITHUB_EVENT_PULL_REQUEST_HEAD_LABEL: ${{ github.event.pull_request.head.label }}
GITHUB_EVENT_PULL_REQUEST_HEAD_REPO_FULL_NAME: ${{ github.event.pull_request.head.repo.full_name }}
GITHUB_REPOSITORY: ${{ github.repository }}
- name: Override commits and pr for pull requests
shell: bash
run: |
if [ -z "$CC_SHA" ];
then
CC_SHA="$GITHUB_EVENT_PULL_REQUEST_HEAD_SHA"
fi
if [ -z "$CC_PR" ] && [ "$CC_FORK" == 'true' ];
then
CC_PR="$GITHUB_EVENT_NUMBER"
fi
echo "CC_SHA=$CC_SHA" >> "$GITHUB_ENV"
echo "CC_PR=$CC_PR" >> "$GITHUB_ENV"
env:
CC_PR: ${{ inputs.override_pr }}
CC_SHA: ${{ inputs.override_commit }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_NUMBER: ${{ github.event.number }}
GITHUB_EVENT_PULL_REQUEST_HEAD_SHA: ${{ github.event.pull_request.head.sha }}
- name: Upload coverage to Codecov
run: ${GITHUB_ACTION_PATH}/dist/codecov.sh
shell: bash
working-directory: ${{ inputs.working-directory }}
env:
CC_BASE_SHA: ${{ inputs.base_sha }}
CC_BINARY: ${{ inputs.binary }}
CC_BUILD: ${{ inputs.override_build }}
CC_BUILD_URL: ${{ inputs.override_build_url }}
CC_CODE: ${{ inputs.report_code }}
CC_DIR: ${{ inputs.directory }}
CC_DISABLE_FILE_FIXES: ${{ inputs.disable_file_fixes }}
CC_DISABLE_SEARCH: ${{ inputs.disable_search }}
CC_DISABLE_TELEM: ${{ inputs.disable_telem }}
CC_DRY_RUN: ${{ inputs.dry_run }}
CC_ENTERPRISE_URL: ${{ inputs.url }}
CC_ENV: ${{ inputs.env_vars }}
CC_EXCLUDES: ${{ inputs.exclude }}
CC_FAIL_ON_ERROR: ${{ inputs.fail_ci_if_error }}
CC_FILES: ${{ inputs.files }}
CC_FLAGS: ${{ inputs.flags }}
CC_FORCE: ${{ inputs.force }}
CC_GCOV_ARGS: ${{ inputs.gcov_args }}
CC_GCOV_EXECUTABLE: ${{ inputs.gcov_executable }}
CC_GCOV_IGNORE: ${{ inputs.gcov_ignore }}
CC_GCOV_INCLUDE: ${{ inputs.gcov_include }}
CC_GIT_SERVICE: ${{ inputs.git_service }}
CC_HANDLE_NO_REPORTS_FOUND: ${{ inputs.handle_no_reports_found }}
CC_JOB_CODE: ${{ inputs.job_code }}
CC_LEGACY: ${{ inputs.use_legacy_upload_endpoint }}
CC_NAME: ${{ inputs.name }}
CC_NETWORK_FILTER: ${{ inputs.network_filter }}
CC_NETWORK_PREFIX: ${{ inputs.network_prefix }}
CC_NETWORK_ROOT_FOLDER: ${{ inputs.root_dir }}
CC_OS: ${{ inputs.os }}
CC_PARENT_SHA: ${{ inputs.commit_parent }}
CC_PLUGINS: ${{ inputs.plugins }}
CC_RECURSE_SUBMODULES: ${{ inputs.recurse_submodules }}
CC_REPORT_TYPE: ${{ inputs.report_type }}
CC_RUN_CMD: ${{ inputs.run_command }}
CC_SERVICE: ${{ inputs.git_service }}
CC_SKIP_VALIDATION: ${{ inputs.skip_validation }}
CC_SLUG: ${{ inputs.slug }}
CC_SWIFT_PROJECT: ${{ inputs.swift_project }}
CC_USE_PYPI: ${{ inputs.use_pypi }}
CC_VERBOSE: ${{ inputs.verbose }}
CC_VERSION: ${{ inputs.version }}
CC_YML_PATH: ${{ inputs.codecov_yml_path }}
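For reference, a minimal step exercising the inputs mapped to the CC_* variables above might look like the following sketch. The codecov/codecov-action slug and version tag are assumptions (this entry's header falls outside the excerpt), and the file path, flag, and secret name are placeholders:
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5   # assumed slug and version, for illustration only
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: ./coverage.xml
    flags: unittests
    fail_ci_if_error: true
    verbose: true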
Action ID: marketplace/dineshsonachalam/markdown-autodocs
Author: Dinesh Sonachalam <dineshsonachalam@gmail.com>
Publisher: dineshsonachalam
Repository: github.com/dineshsonachalam/markdown-autodocs
A GitHub action that automatically formats markdown files, syncs external docs/src code & makes better docs.
| Name | Required | Description |
|---|---|---|
| commit_author | Optional | Value used for the commit author. Defaults to the username of whoever triggered this workflow run. Default: ${{ github.actor }} <${{ github.actor }}@users.noreply.github.com> |
| commit_user_email | Optional | Email address used for the commit user Default: actions@github.com |
| commit_message | Optional | Commit message Default: Apply automatic changes |
| branch | Optional | Git branch name, where changes should be pushed to. Default: ${{ github.head_ref }} |
| output_file_paths | Optional | Output markdown file paths. Default: [./README.md] |
| categories | Optional | Categories to automatically sync or transform its contents in the markdown files. Default: [code-block,json-to-html-table,workflow-artifact-table,markdown] |
name: Markdown autodocs
description: 'A github action that automatically format markdown files, sync external docs/src code & make better docs.'
author: Dinesh Sonachalam <dineshsonachalam@gmail.com>
inputs:
commit_author:
description: Value used for the commit author. Defaults to the username of whoever triggered this workflow run.
required: false
default: ${{ github.actor }} <${{ github.actor }}@users.noreply.github.com>
commit_user_email:
description: Email address used for the commit user
required: false
default: actions@github.com
commit_message:
description: Commit message
required: false
default: Apply automatic changes
branch:
description: Git branch name, where changes should be pushed too.
required: false
default: ${{ github.head_ref }}
output_file_paths:
description: Output markdown file paths.
required: false
default: '[./README.md]'
categories:
description: Categories to automatically sync or transform its contents in the markdown files.
required: false
default: '[code-block,json-to-html-table,workflow-artifact-table,markdown]'
runs:
using: composite
steps:
- run: wget https://raw.githubusercontent.com/dineshsonachalam/markdown-autodocs/master/action.py
shell: bash
- run: python3 action.py -repo '${{ github.repository }}' -access_token '${{ github.token }}' -commit_author '${{ inputs.commit_author }}' -commit_user_email '${{ inputs.commit_user_email }}' -commit_message '${{ inputs.commit_message }}' -branch '${{ inputs.branch }}' -output_file_paths '${{ inputs.output_file_paths }}' -categories '${{ inputs.categories }}'
shell: bash
branding:
icon: 'book'
color: blue
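Example usage, as an illustrative sketch only; the version tag, commit message, and paths below are placeholders rather than values taken from this entry:
steps:
  - uses: actions/checkout@v4
  - name: Sync code samples into README
    uses: dineshsonachalam/markdown-autodocs@v1   # placeholder version tag
    with:
      commit_message: Apply automatic doc changes
      output_file_paths: '[./README.md]'
      categories: '[code-block,json-to-html-table]'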
Action ID: marketplace/azure/kwok
Author: Unknown
Publisher: azure
Repository: github.com/azure/kwok
This action sets up kwok/kwokctl for use in your workflow
| Name | Required | Description |
|---|---|---|
| command | Required | Command to install |
| kwok-version | Optional | Specific version of command to install, defaults to latest release |
| repository | Optional | Repository is kwok's repository, will use release from this repository, defaults same as uses in this step Default: kubernetes-sigs/kwok |
name: Setup kwok
description: This action sets up kwok/kwokctl for use in your workflow
inputs:
command:
required: true
description: Command to install
kwok-version:
required: false
description: Specific version of command to install, defaults to latest release
repository:
required: false
description: Repository is kwok's repository, will use release from this repository, defaults same as uses in this step
default: "kubernetes-sigs/kwok"
runs:
using: composite
steps:
- name: Install ${{ inputs.command }}
shell: bash
env:
KWOK_REPO: ${{ inputs.repository }}
KWOK_VERSION: ${{ inputs.kwok-version }}
run: |
if [[ -f /usr/local/bin/${{ inputs.command }} ]]; then
echo "Found ${{ inputs.command }} in /usr/local/bin, skipping installation"
exit 0
fi
if [[ -z "${KWOK_VERSION}" ]]; then
echo "Fetching latest version..."
KWOK_VERSION="$(curl "https://api.github.com/repos/${KWOK_REPO}/releases/latest" | jq -r '.tag_name')"
if [[ -z "${KWOK_VERSION}" ]]; then
echo "Failed to fetch latest version"
exit 1
fi
if [[ "${KWOK_VERSION}" == "null" ]]; then
echo "Failed to fetch latest version"
exit 1
fi
echo "Latest version is ${KWOK_VERSION}"
fi
echo "Installing ${{ inputs.command }} ${KWOK_VERSION}..."
wget -O ${{ inputs.command }} "https://github.com/${KWOK_REPO}/releases/download/${KWOK_VERSION}/${{ inputs.command }}-$(go env GOOS)-$(go env GOARCH)"
chmod +x ${{ inputs.command }}
sudo mv ${{ inputs.command }} /usr/local/bin/
if ! ${{ inputs.command }} --version; then
echo "Failed to run ${{ inputs.command }} --version"
exit 1
fi
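Example usage (a sketch; the version tag and pinned kwok release are placeholders). Note that the install script above shells out to `go env GOOS`/`go env GOARCH`, so Go must be available on the runner:
steps:
  - name: Set up kwokctl
    uses: azure/kwok@v1   # placeholder version tag
    with:
      command: kwokctl       # e.g. kwok or kwokctl
      kwok-version: v0.6.0   # example pin; omit to install the latest release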
Action ID: marketplace/github/accessibility-alt-text-bot
Author: Unknown
Publisher: github
Repository: github.com/github/accessibility-alt-text-bot
This action will check a repo's issue, discussion, or PR for correct alt text usage.
| Name | Required | Description |
|---|---|---|
| config | Optional | A custom linting configuration |
name: Accessibility alt text bot
description: "This action will check a repos issue, discussion, or PR for correct alt text usage."
branding:
icon: "eye"
color: "purple"
inputs:
config:
description: "A custom linting configuration"
required: false
runs:
using: "composite"
steps:
- name: Runs alt text check and adds comment
run: |
source ${{ github.action_path }}/queries.sh
if [ ${{ github.event.comment }} ]; then
content=$COMMENT
user=${{ github.event.comment.user.login }}
target_id=${{ github.event.comment.id }}
if ${{ github.event.issue.pull_request.url != '' }}; then
type=pr_comment
issue_url=${{ github.event.issue.html_url }}
bot_comment_id=$(gh api repos/${{ github.repository }}/issues/${{ github.event.issue.number }}/comments | jq -r '.[] | select(.user.login == "github-actions[bot]") | select(.body | test("<div alt-text-bot-id=\"${{ github.event.comment.id }}\"")) | .id')
elif ${{ github.event.discussion.id != '' }}; then
type=discussion_comment
discussion_node_id='${{ github.event.discussion.node_id }}'
comment_node_id='${{ github.event.comment.node_id }}'
bot_comment_id=$(gh api repos/${{ github.repository }}/discussions/${{ github.event.discussion.number }}/comments | jq -r '.[] | select(.user.login == "github-actions[bot]") | select(.body | test("<div alt-text-bot-id=\"${{ github.event.comment.id }}\"")) | .node_id')
if ${{ github.event.comment.parent_id != '' }}; then
reply_to_id=$(getDiscussionReplyToId $comment_node_id)
else
reply_to_id=$comment_node_id
fi
else
type=issue_comment
issue_url=${{ github.event.issue.html_url }}
bot_comment_id=$(gh api repos/${{ github.repository }}/issues/${{ github.event.issue.number }}/comments | jq -r '.[] | select(.user.login == "github-actions[bot]") | select(.body | test("<div alt-text-bot-id=\"${{ github.event.comment.id }}\"")) | .id')
fi
target=${{ github.event.comment.html_url }}
else
if [ ${{ github.event.issue }} ]; then
type=issue_description
content=$ISSUE_BODY
issue_url=${{ github.event.issue.html_url }}
user=${{ github.event.issue.user.login }}
target="your issue body"
target_id=${{ github.event.issue.id }}
bot_comment_id=$(gh api repos/${{ github.repository }}/issues/${{ github.event.issue.number }}/comments | jq -r '.[] | select(.user.login == "github-actions[bot]") | select(.body | test("<div alt-text-bot-id=\"${{ github.event.issue.id }}\"")) | .id')
elif [ ${{ github.event.pull_request }} ]; then
type=pr_description
content=$PR_BODY
issue_url=${{ github.event.pull_request.html_url }}
user=${{ github.event.pull_request.user.login }}
target="your pull request body"
target_id=${{ github.event.pull_request.id }}
bot_comment_id=$(gh api repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments | jq -r '.[] | select(.user.login == "github-actions[bot]") | select(.body | test("<div alt-text-bot-id=\"${{ github.event.pull_request.id }}\"")) | .id')
elif [ ${{ github.event.discussion }} ]; then
type=discussion_description
content=$DISCUSSION_BODY
discussion_node_id='${{ github.event.discussion.node_id }}'
user=${{ github.event.discussion.user.login }}
target="your discussion body"
target_id=${{ github.event.discussion.id }}
bot_comment_id=$(gh api repos/${{ github.repository }}/discussions/${{ github.event.discussion.number }}/comments | jq -r '.[] | select(.user.login == "github-actions[bot]") | select(.body | test("<div alt-text-bot-id=\"${{ github.event.discussion.id }}\"")) | .node_id')
fi
fi
flag=$(node ${{ github.action_path }}/src/index.js "$content" "$CONFIG")
custom_config_message="<div alt-text-bot-id=\"$target_id\" />
Uh oh! @$user, your markdown has a few linting errors. Check $target to fix the following violations:
$flag
> 🤖 Beep boop! This comment was added automatically by [github/accessibility-alt-text-bot](https://github.com/github/accessibility-alt-text-bot).
"
message="<div alt-text-bot-id=\"$target_id\" />
Uh oh! @$user, at least one image you shared is missing helpful alt text. Check $target to fix the following violations:
$flag
Alt text is an invisible description that helps screen readers describe images to blind or low-vision users. If you are using markdown to display images, add your alt text inside the brackets of the markdown image.
Learn more about alt text at [Basic writing and formatting syntax: images on GitHub Docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#images).
> 🤖 Beep boop! This comment was added automatically by [github/accessibility-alt-text-bot](https://github.com/github/accessibility-alt-text-bot).
"
echo "Config: $CONFIG"
if [ "$CONFIG" ]; then
message=$custom_config_message
fi
echo "Detected errors: ${flag}"
echo "Event type: $type"
if [[ $flag && ${{ github.event.action }} != 'deleted' ]]; then
if [ $bot_comment_id ]; then
if [[ $type = pr_comment ]] || [[ $type = pr_description ]]; then
gh api repos/${{ github.repository }}/issues/comments/$bot_comment_id -X PATCH -f body="$message"
elif [[ $type = issue_comment ]] || [[ $type = issue_description ]]; then
gh api repos/${{ github.repository }}/issues/comments/$bot_comment_id -X PATCH -f body="$message"
elif [[ $type = discussion_description ]] || [[ $type = discussion_comment ]]; then
gh api graphql -f query='mutation($commentId: ID!, $body: String!) { updateDiscussionComment(input: {commentId: $commentId, body: $body}) { comment { id body }}}' -f commentId=$bot_comment_id -f body="$message"
fi
else
if [[ $type = pr_comment ]] || [[ $type = pr_description ]]; then
gh pr comment $issue_url --body "$message"
elif [[ $type = issue_comment ]] || [[ $type = issue_description ]]; then
gh issue comment $issue_url --body "$message"
elif [[ $type = discussion_description ]]; then
addDiscussionComment $discussion_node_id "$message"
elif [[ $type = discussion_comment ]]; then
addDiscussionComment $discussion_node_id "$message" $reply_to_id
fi
fi
else
echo "bot_comment_id: $bot_comment_id"
if [ $bot_comment_id ]; then
echo "Deleting bot comment..."
if [[ $type = pr_comment ]] || [[ $type = pr_description ]]; then
gh api -X DELETE /repos/${{ github.repository }}/issues/comments/$bot_comment_id
elif [[ $type = issue_comment ]] || [[ $type = issue_description ]]; then
gh api -X DELETE /repos/${{ github.repository }}/issues/comments/$bot_comment_id
elif [[ $type = discussion_description ]] || [[ $type = discussion_comment ]]; then
gh api graphql -f query='mutation($id: ID!) { deleteDiscussionComment(input: {id: $id}) { clientMutationId } }' -f id=$bot_comment_id
fi
fi
fi
shell: bash
env:
GITHUB_TOKEN: ${{ github.token }}
COMMENT: ${{ github.event.comment.body }}
ISSUE_BODY: ${{ github.event.issue.body }}
PR_BODY: ${{ github.event.pull_request.body }}
DISCUSSION_BODY: ${{ github.event.discussion.body }}
CONFIG: ${{ inputs.config }}
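A workflow sketch wiring this action up (the triggers, permissions, and version tag are illustrative assumptions, not taken from this entry):
on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created, edited]
  pull_request:
    types: [opened, edited]
permissions:
  issues: write
  pull-requests: write
jobs:
  alt-text:
    runs-on: ubuntu-latest
    steps:
      - uses: github/accessibility-alt-text-bot@v1   # placeholder version tag; discussion events are also handled by the script above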
Action ID: marketplace/amirisback/frogo-notification
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/frogo-notification
Library Easy Notification, Full and Clear Documentation,
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Frogo-Notification'
description: 'Library Easy Notification, Full and Clear Documentation, '
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/Synapse-workspace-deployment
Author: Unknown
Publisher: azure
Repository: github.com/azure/Synapse-workspace-deployment
Use this GitHub action to deploy synapse workspace.
| Name | Required | Description |
|---|---|---|
| TargetWorkspaceName | Required | Provide the Synapse workspace name where you want to deploy the artifacts. |
| TemplateFile | Required | Specify the path to the workspace artifacts template. |
| ParametersFile | Required | Specify the path to the template parameter file. |
| OverrideArmParameters | Optional | Specify deployment parameter values. |
| Environment | Required | Provide the type of cloud environment. Valid values are: Azure Public, Azure China, Azure US Government, Azure Germany |
| resourceGroup | Required | Provide the resource group of the target Synapse workspace. |
| clientId | Optional | Provide client id of service principal. |
| clientSecret | Optional | Provide client secret of the service principal. |
| subscriptionId | Required | Provide subscription id. |
| tenantId | Optional | Provide tenant id. |
| DeleteArtifactsNotInTemplate | Optional | Delete the artifacts which are in the workspace but not in the template. |
| managedIdentity | Optional | Use managed identity to generate the bearer token? |
| deployManagedPrivateEndpoint | Optional | Deploy managed private endpoints in the template. |
| FailOnMissingOverrides | Optional | Mark the pipeline as failed if ARM overrides are missing. |
| ArtifactsFolder | Optional | Provide path to the root folder. |
| operation | Required | Provide name of the operation. |
| npmpackage | Optional | Source for the npm package. Only for dev testing. |
name: 'Synapse workspace deployment'
description: 'Use this GitHub action to deploy synapse workspace.'
inputs:
TargetWorkspaceName:
description: 'Provide the Synapse workspace name where you want to deploy the artifacts.'
required: true
TemplateFile:
description: 'Specify the path to the workspace artifacts template.'
required: true
ParametersFile:
description: 'Specify the path to the template parameter file.'
required: true
OverrideArmParameters:
description: 'Specify deployment parameter values.'
default: ''
required: false
Environment:
description: 'Provide the type of cloud environment. Valid values are: Azure Public, Azure China, Azure US Government, Azure Germany'
required: true
resourceGroup:
description: 'Provide the resource group of the target Synapse workspace.'
required: true
clientId:
description: 'Provide client id of service principal.'
required: false
clientSecret:
description: 'Provide client secret of the service principal.'
required: false
subscriptionId:
description: 'Provide subscription id.'
required: true
tenantId:
description: 'Provide tenant id.'
required: false
DeleteArtifactsNotInTemplate:
description: 'Delete the artifacts which are in the workspace but not in the template.'
required: false
managedIdentity:
description: 'Use managed identity to generate the bearer token?'
required: false
deployManagedPrivateEndpoint:
description: 'Deploy managed private endpoints in the template.'
required: false
FailOnMissingOverrides:
description: 'Mark the pipeline as failed if ARM overrides are missing.'
required: false
ArtifactsFolder:
description: 'Provide path to the root folder.'
required: false
operation:
description: 'Provide name of the operation.'
required: true
npmpackage:
description: 'Source for the npm package. Only for dev testing.'
required: false
runs:
using: 'node20'
main: './dist/index.js'
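Example usage, as a sketch only; the version tag, file paths, workspace/resource names, secret names, and the operation value are placeholders and assumptions, not values documented in this entry:
steps:
  - uses: actions/checkout@v4
  - name: Deploy Synapse workspace artifacts
    uses: azure/synapse-workspace-deployment@v1   # placeholder version tag
    with:
      TargetWorkspaceName: my-synapse-ws
      TemplateFile: ./TemplateForWorkspace.json
      ParametersFile: ./TemplateParametersForWorkspace.json
      Environment: Azure Public
      resourceGroup: my-resource-group
      subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      clientId: ${{ secrets.AZURE_CLIENT_ID }}
      clientSecret: ${{ secrets.AZURE_CLIENT_SECRET }}
      tenantId: ${{ secrets.AZURE_TENANT_ID }}
      operation: deploy   # example value; see the repository README for supported operations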
Action ID: marketplace/dflook/run-in-container
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/run-in-container
Run a command in a container
| Name | Required | Description |
|---|---|---|
| image | Required | The image to run the command in |
| run | Required | The command to run |
| volumes | Optional | Volumes to mount, one per line. Each line of the form 'source:target' |
name: Run in container
description: Run a command in a container
author: Daniel Flook
inputs:
image:
description: The image to run the command in
required: true
run:
description: The command to run
required: true
volumes:
description: Volumes to mount, one per line. Each line of the form 'source:target'
required: false
default: ""
runs:
using: composite
steps:
- name: Run command
env:
INPUT_VOLUMES: ${{ inputs.volumes }}
run: |
docker pull --quiet ${{ inputs.image }}
function run() {
docker run --rm \
--workdir /github/workspace \
-e "GITHUB_WORKSPACE=/github/workspace" \
-v $GITHUB_WORKSPACE:/github/workspace \
-e "HOME=/github/home" \
-v "$RUNNER_TEMP/_github_home":"/github/home" \
-v "/var/run/docker.sock":"/var/run/docker.sock" \
--mount type=bind,source="$RUNNER_TEMP/run.sh",target=/run.sh,readonly \
-e GITHUB_EVENT_PATH \
--mount type=bind,source="$GITHUB_EVENT_PATH",target="$GITHUB_EVENT_PATH,readonly" \
-e GITHUB_PATH \
--mount type=bind,source="$GITHUB_PATH",target="$GITHUB_PATH" \
-e GITHUB_ENV \
--mount type=bind,source="$GITHUB_ENV",target="$GITHUB_ENV" \
-e GITHUB_STEP_SUMMARY \
--mount type=bind,source="$GITHUB_STEP_SUMMARY",target="$GITHUB_STEP_SUMMARY" \
-e GITHUB_TOKEN \
-e GITHUB_JOB \
-e GITHUB_REF \
-e GITHUB_SHA \
-e GITHUB_REPOSITORY \
-e GITHUB_REPOSITORY_OWNER \
-e GITHUB_RUN_ID \
-e GITHUB_RUN_NUMBER \
-e GITHUB_RETENTION_DAYS \
-e GITHUB_RUN_ATTEMPT \
-e GITHUB_ACTOR \
-e GITHUB_WORKFLOW \
-e GITHUB_HEAD_REF \
-e GITHUB_BASE_REF \
-e GITHUB_EVENT_NAME \
-e GITHUB_SERVER_URL \
-e GITHUB_API_URL \
-e GITHUB_GRAPHQL_URL \
-e GITHUB_ACTION \
-e GITHUB_ACTION_REPOSITORY \
-e GITHUB_ACTION_REF \
-e RUNNER_DEBUG \
-e RUNNER_OS \
-e RUNNER_NAME \
-e RUNNER_TOOL_CACHE \
-e ACTIONS_RUNTIME_URL \
-e ACTIONS_RUNTIME_TOKEN \
-e ACTIONS_CACHE_URL \
-e GITHUB_ACTIONS \
-e CI \
-e GITHUB_ACTOR_ID \
-e GITHUB_OUTPUT \
-e GITHUB_REF_NAME \
-e GITHUB_REF_PROTECTED \
-e GITHUB_REF_TYPE \
-e GITHUB_REPOSITORY_ID \
-e GITHUB_REPOSITORY_OWNER_ID \
-e GITHUB_TRIGGERING_ACTOR \
-e GITHUB_WORKFLOW_REF \
-e GITHUB_WORKFLOW_SHA \
$VOLUMES_ARGS \
--entrypoint /bin/bash \
${{ inputs.image }} \
--noprofile --norc -eo pipefail /run.sh
}
function fix_owners() {
cat <<"EOF" >"$RUNNER_TEMP/run.sh"
chown -R --reference "$GITHUB_WORKSPACE" "$GITHUB_WORKSPACE/" || true
chown -R --reference "/github/home" "/github/home/" || true
chown --reference "$GITHUB_WORKSPACE" $GITHUB_PATH || true
chown --reference "$GITHUB_WORKSPACE" $GITHUB_ENV || true
chown --reference "$GITHUB_WORKSPACE" $GITHUB_STEP_SUMMARY || true
EOF
VOLUMES_ARGS=""
run
rm -f "$RUNNER_TEMP/run.sh"
}
trap fix_owners EXIT
VOLUMES_ARGS=""
if [[ -n "$INPUT_VOLUMES" ]]; then
for mount in $(echo "$INPUT_VOLUMES" | tr ',' '\n'); do
VOLUMES_ARGS="$VOLUMES_ARGS -v $mount"
done
fi
cat <<"EOF" >"$RUNNER_TEMP/run.sh"
${{ inputs.run }}
EOF
set -x
run
set +x
shell: bash
branding:
icon: globe
color: purple
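Example usage (a sketch; the version tag, image, commands, and mount paths are placeholders):
steps:
  - uses: actions/checkout@v4
  - name: Run tests inside a container
    uses: dflook/run-in-container@v1   # placeholder version tag
    with:
      image: python:3.12-slim
      run: |
        python --version
        pip install -r requirements.txt
      volumes: |
        ${{ github.workspace }}/cache:/cache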
Action ID: marketplace/wei/app-stats-action
Author: Unknown
Publisher: wei
Repository: github.com/wei/app-stats-action
Retrieve statistics for a GitHub App
| Name | Required | Description |
|---|---|---|
| id | Required | App ID |
| private_key | Required | contents of the app's *.pem private key file. |

| Name | Description |
|---|---|
| installations | Number of installations |
| repositories | Number of repositories |
| suspended_installations | Number of suspended installations |
| popular_repositories | JSON string for user/organization login and total number of stars of public repositories the app is installed on |
name: GitHub App Statistics
description: "Retrieve statistics for a GitHub App"
branding:
icon: "bar-chart-2"
color: purple
inputs:
id:
description: "App ID"
required: true
private_key:
description: "contents of the app's *.pem private key file."
required: true
outputs:
installations:
description: "Number of installations"
repositories:
description: "Number of repositories"
suspended_installations:
description: "Number of suspended installations"
popular_repositories:
description: "JSON string for user/organization login and total number of stars of public repositories the app is installed on"
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/tgymnich/github-action-benchmark
Author: github-action-benchmark developers <https://github.com/benchmark-action>
Publisher: tgymnich
Repository: github.com/tgymnich/github-action-benchmark
Continuous Benchmark using GitHub pages as dash board for keeping performance
| Name | Required | Description |
|---|---|---|
| name | Required | Name of the benchmark. This value must be identical among all benchmarks Default: Benchmark |
| tool | Required | Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter" |
| output-file-path | Required | A path to file which contains the benchmark output |
| gh-pages-branch | Required | Branch for gh-pages Default: gh-pages |
| gh-repository | Optional | Url to an optional different repository to store benchmark results |
| benchmark-data-dir-path | Required | Path to directory which contains benchmark files on GitHub pages branch Default: dev/bench |
| github-token | Optional | GitHub API token to pull/push GitHub pages branch and deploy GitHub pages. For public repository, this must be personal access token for now. Please read README.md for more details |
| ref | Optional | optional Ref to use when finding commit |
| auto-push | Optional | Push GitHub Pages branch to remote automatically. This option requires github-token input |
| skip-fetch-gh-pages | Optional | Skip pulling GitHub Pages branch before generating an auto commit |
| comment-always | Optional | Leave a comment with benchmark result comparison. To enable this feature, github-token input must be given as well |
| summary-always | Optional | Leave a job summary with benchmark result comparison |
| save-data-file | Optional | Save the benchmark data to external file Default: True |
| comment-on-alert | Optional | Leave an alert comment when current benchmark result is worse than previous. Threshold is specified with alert-threshold input. To enable this feature, github-token input must be given as well |
| alert-threshold | Optional | Threshold which determines if an alert should happen or not. Percentage value such as "150%". For example, 150% means that an alert happens when current benchmark result is 1.5x worse than previous Default: 200% |
| fail-on-alert | Optional | Workflow fails when alert comment happens |
| fail-threshold | Optional | Threshold which determines if the current workflow fails. Format is the same as alert-threshold input. If this value is not specified, the same value as alert-threshold is used |
| alert-comment-cc-users | Optional | Comma separated GitHub user names which start with @ (e.g. "@foo,@bar"). They will be mentioned in commit comment for alert. |
| external-data-json-path | Optional | JSON data file for storing benchmark results. When this input is set, github-action-benchmark no longer uses Git branch to store data. Instead, it reads and appends benchmark data from/to the file. User must store the file anywhere |
| max-items-in-chart | Optional | Max data points in a benchmark chart to avoid making the chart too busy. Value must be unsigned integer. No limit by default |
name: 'Continuous Benchmark'
author: 'github-action-benchmark developers <https://github.com/benchmark-action>'
description: 'Continuous Benchmark using GitHub pages as dash board for keeping performance'
branding:
icon: 'fast-forward'
color: 'blue'
inputs:
name:
description: 'Name of the benchmark. This value must be identical among all benchmarks'
required: true
default: 'Benchmark'
tool:
description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter"'
required: true
output-file-path:
description: 'A path to file which contains the benchmark output'
required: true
gh-pages-branch:
description: 'Branch for gh-pages'
required: true
default: 'gh-pages'
gh-repository:
description: 'Url to an optional different repository to store benchmark results'
required: false
benchmark-data-dir-path:
description: 'Path to directory which contains benchmark files on GitHub pages branch'
required: true
default: 'dev/bench'
github-token:
description: 'GitHub API token to pull/push GitHub pages branch and deploy GitHub pages. For public repository, this must be personal access token for now. Please read README.md for more details'
required: false
ref:
description: 'optional Ref to use when finding commit'
required: false
auto-push:
description: 'Push GitHub Pages branch to remote automatically. This option requires github-token input'
required: false
default: false
skip-fetch-gh-pages:
description: 'Skip pulling GitHub Pages branch before generating an auto commit'
required: false
default: false
comment-always:
description: 'Leave a comment with benchmark result comparison. To enable this feature, github-token input must be given as well'
required: false
default: false
summary-always:
description: 'Leave a job summary with benchmark result comparison'
required: false
default: false
save-data-file:
description: 'Save the benchmark data to external file'
required: false
default: true
comment-on-alert:
description: 'Leave an alert comment when current benchmark result is worse than previous. Threshold is specified with alert-threshold input. To enable this feature, github-token input must be given as well'
required: false
default: false
alert-threshold:
description: 'Threshold which determines if an alert should happen or not. Percentage value such as "150%". For example, 150% means that an alert happens when current benchmark result is 1.5x worse than previous'
required: false
default: '200%'
fail-on-alert:
description: 'Workflow fails when alert comment happens'
required: false
# Note: Set to false by default since this action does not push to remote by default. When workflow
# fails and auto-push is not set, there is no chance to push the result to remote.
default: false
fail-threshold:
description: 'Threshold which determines if the current workflow fails. Format is the same as alert-threshold input. If this value is not specified, the same value as alert-threshold is used'
required: false
alert-comment-cc-users:
description: 'Comma separated GitHub user names which start with @ (e.g. "@foo,@bar"). They will be mentioned in commit comment for alert.'
required: false
external-data-json-path:
description: 'JSON data file for storing benchmark results. When this input is set, github-action-benchmark no longer uses Git branch to store data. Instead, it reads and appends benchmark data from/to the file. User must store the file anywhere'
required: false
max-items-in-chart:
description: 'Max data points in a benchmark chart to avoid making the chart too busy. Value must be unsigned integer. No limit by default'
required: false
runs:
using: 'node20'
main: 'dist/src/index.js'
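A sketch of a Go benchmark job using this action; the version tag, benchmark command, and threshold are placeholders, and `auto-push` assumes a gh-pages branch exists:
steps:
  - uses: actions/checkout@v4
  - name: Run benchmarks
    run: go test -bench=. ./... | tee output.txt
  - name: Store benchmark result
    uses: tgymnich/github-action-benchmark@v1   # placeholder version tag
    with:
      name: Go Benchmark
      tool: go
      output-file-path: output.txt
      github-token: ${{ secrets.GITHUB_TOKEN }}
      auto-push: true
      comment-on-alert: true
      alert-threshold: '150%'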
Action ID: marketplace/aws-actions/closed-issue-message
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/closed-issue-message
Add a default comment on all issues that get closed
| Name | Required | Description |
|---|---|---|
| repo-token | Required | Token for the repository. Can be passed in using {{ secrets.GITHUB_TOKEN }} |
| message | Required | The message you want to be commented whenever an issue is closed |
---
name: 'Closed Issue Message'
description: 'Add a default comment on all issues that get closed'
inputs:
repo-token:
description: 'Token for the repository. Can be passed in using {{ secrets.GITHUB_TOKEN }}'
required: true
message:
description: 'The message you want to be commented whenever an issue is closed'
required: true
runs:
using: 'node20'
main: 'entrypoint.js'
branding:
icon: 'message-square'
color: 'orange'
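Example workflow sketch (the version tag and message text are placeholders):
on:
  issues:
    types: [closed]
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/closed-issue-message@v1   # placeholder version tag
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          message: |
            This issue is now closed. Comments on closed issues are hard for our team to see.
            If you need further assistance, please open a new issue that references this one.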
Action ID: marketplace/sobolevn/black
Author: Łukasz Langa and contributors to Black
Publisher: sobolevn
Repository: github.com/sobolevn/black
The uncompromising Python code formatter.
| Name | Required | Description |
|---|---|---|
| options | Optional | Options passed to Black. Use `black --help` to see available options. Default: '--check' Default: --check --diff |
| src | Optional | Source to run Black. Default: '.' Default: . |
| black_args | Optional | [DEPRECATED] Black input arguments. |
| version | Optional | Python Version specifier (PEP440) - e.g. "21.5b1" |
name: "Black"
description: "The uncompromising Python code formatter."
author: "Łukasz Langa and contributors to Black"
inputs:
options:
description:
"Options passed to Black. Use `black --help` to see available options. Default:
'--check'"
required: false
default: "--check --diff"
src:
description: "Source to run Black. Default: '.'"
required: false
default: "."
black_args:
description: "[DEPRECATED] Black input arguments."
required: false
default: ""
deprecationMessage:
"Input `with.black_args` is deprecated. Use `with.options` and `with.src` instead."
version:
description: 'Python Version specifier (PEP440) - e.g. "21.5b1"'
required: false
default: ""
branding:
color: "black"
icon: "check-circle"
runs:
using: composite
steps:
- run: |
# Exists since using github.action_path + path to main script doesn't work because bash
# interprets the backslashes in github.action_path (which are used when the runner OS
# is Windows) destroying the path to the target file.
#
# Also semicolons are necessary because I can't get the newlines to work
entrypoint="import sys;
import subprocess;
from pathlib import Path;
MAIN_SCRIPT = Path(r'${GITHUB_ACTION_PATH}') / 'action' / 'main.py';
proc = subprocess.run([sys.executable, str(MAIN_SCRIPT)]);
sys.exit(proc.returncode)
"
if [ "$RUNNER_OS" == "Windows" ]; then
echo $entrypoint | python
else
echo $entrypoint | python3
fi
env:
# TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
INPUT_OPTIONS: ${{ inputs.options }}
INPUT_SRC: ${{ inputs.src }}
INPUT_BLACK_ARGS: ${{ inputs.black_args }}
INPUT_VERSION: ${{ inputs.version }}
pythonioencoding: utf-8
shell: bash
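Example usage (a sketch; the ref, options, source path, and Black version specifier are placeholders, not defaults from this entry):
steps:
  - uses: actions/checkout@v4
  - name: Check formatting with Black
    uses: sobolevn/black@main   # placeholder ref
    with:
      options: "--check --diff --verbose"
      src: "./src"
      version: "24.8.0"   # example PEP 440 specifier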
Action ID: marketplace/mheap/github-action-fail-at-weekend
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-fail-at-weekend
Fails all builds at the weekend
name: Fail at Weekend
description: Fails all builds at the weekend
runs:
using: node12
main: index.js
branding:
icon: alert-octagon
color: red
Action ID: marketplace/actions/setup-go
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-go
Setup a Go environment and add it to the PATH
| Name | Required | Description |
|---|---|---|
| go-version | Optional | The Go version to download (if necessary) and use. Supports semver spec and ranges. Be sure to enclose this option in single quotation marks. |
| go-version-file | Optional | Path to the go.mod, go.work, .go-version, or .tool-versions file. |
| check-latest | Optional | Set this option to true if you want the action to always check for the latest available version that satisfies the version spec |
| token | Optional | Used to pull Go distributions from go-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. Default: ${{ github.server_url == 'https://github.com' && github.token \|\| '' }} |
| cache | Optional | Used to specify whether caching is needed. Set to true, if you'd like to enable caching. Default: True |
| cache-dependency-path | Optional | Used to specify the path to a dependency file - go.sum |
| architecture | Optional | Target architecture for Go to use. Examples: x86, x64. Will use system architecture by default. |

| Name | Description |
|---|---|
| go-version | The installed Go version. Useful when given a version range as input. |
| cache-hit | A boolean value to indicate if a cache was hit |
name: 'Setup Go environment'
description: 'Setup a Go environment and add it to the PATH'
author: 'GitHub'
inputs:
go-version:
description: 'The Go version to download (if necessary) and use. Supports semver spec and ranges. Be sure to enclose this option in single quotation marks.'
go-version-file:
description: 'Path to the go.mod, go.work, .go-version, or .tool-versions file.'
check-latest:
description: 'Set this option to true if you want the action to always check for the latest available version that satisfies the version spec'
default: false
token:
description: Used to pull Go distributions from go-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
cache:
description: Used to specify whether caching is needed. Set to true, if you'd like to enable caching.
default: true
cache-dependency-path:
description: 'Used to specify the path to a dependency file - go.sum'
architecture:
description: 'Target architecture for Go to use. Examples: x86, x64. Will use system architecture by default.'
outputs:
go-version:
description: 'The installed Go version. Useful when given a version range as input.'
cache-hit:
description: 'A boolean value to indicate if a cache was hit'
runs:
using: 'node20'
main: 'dist/setup/index.js'
post: 'dist/cache-save/index.js'
post-if: success()
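Example usage (a sketch; the version tag and Go version are placeholders):
steps:
  - uses: actions/checkout@v4
  - name: Set up Go
    uses: actions/setup-go@v5   # placeholder version tag
    with:
      go-version: '1.22'
      check-latest: true
      cache-dependency-path: go.sum
  - run: go build ./...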
Action ID: marketplace/tibdex/github-app-token
Author: Thibault Derousseaux <tibdex@gmail.com>
Publisher: tibdex
Repository: github.com/tibdex/github-app-token
Run a GitHub Action as a GitHub App instead of using secrets.GITHUB_TOKEN or a personal access token.
| Name | Required | Description |
|---|---|---|
| app_id | Required | ID of the GitHub App. |
| github_api_url | Optional | The API URL of the GitHub server. Default: ${{ github.api_url }} |
| installation_retrieval_mode | Optional | The mode used to retrieve the installation for which the token will be requested. One of: - id: use the installation with the specified ID. - organization: use an organization installation (https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#get-an-organization-installation-for-the-authenticated-app). - repository: use a repository installation (https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#get-a-repository-installation-for-the-authenticated-app). - user: use a user installation (https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#get-a-user-installation-for-the-authenticated-app). Default: repository |
| installation_retrieval_payload | Optional | The payload used to retrieve the installation. Examples for each retrieval mode: - id: 1337 - organization: github - repository: tibdex/github-app-token - user: tibdex Default: ${{ github.repository }} |
| permissions | Optional | The JSON-stringified permissions granted to the token. Defaults to all permissions granted to the GitHub app. See https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#create-an-installation-access-token-for-an-app's `permissions`. |
| private_key | Required | Private key of the GitHub App (can be Base64 encoded). |
| repositories | Optional | The JSON-stringified array of the full names of the repositories the token should have access to. Defaults to all repositories that the installation can access. See https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#create-an-installation-access-token-for-an-app's `repositories`. |
| revoke | Optional | Revoke the token at the end of the job. Default: True |

| Name | Description |
|---|---|
| token | An installation access token for the GitHub App. |
name: GitHub App token
author: Thibault Derousseaux <tibdex@gmail.com>
description: Run a GitHub Action as a GitHub App instead of using secrets.GITHUB_TOKEN or a personal access token.
inputs:
app_id:
description: ID of the GitHub App.
required: true
github_api_url:
description: The API URL of the GitHub server.
default: ${{ github.api_url }}
installation_retrieval_mode:
description: >-
The mode used to retrieve the installation for which the token will be requested.
One of:
- id: use the installation with the specified ID.
- organization: use an organization installation (https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#get-an-organization-installation-for-the-authenticated-app).
- repository: use a repository installation (https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#get-a-repository-installation-for-the-authenticated-app).
- user: use a user installation (https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#get-a-user-installation-for-the-authenticated-app).
default: repository
installation_retrieval_payload:
description: >-
The payload used to retrieve the installation.
Examples for each retrieval mode:
- id: 1337
- organization: github
- repository: tibdex/github-app-token
- user: tibdex
default: ${{ github.repository }}
permissions:
description: >-
The JSON-stringified permissions granted to the token.
Defaults to all permissions granted to the GitHub app.
See https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#create-an-installation-access-token-for-an-app's `permissions`.
private_key:
description: Private key of the GitHub App (can be Base64 encoded).
required: true
repositories:
description: >-
The JSON-stringified array of the full names of the repositories the token should have access to.
Defaults to all repositories that the installation can access.
See https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#create-an-installation-access-token-for-an-app's `repositories`.
revoke:
description: Revoke the token at the end of the job.
default: true
outputs:
token:
description: An installation access token for the GitHub App.
runs:
using: node20
main: dist/main/index.js
post: dist/post/index.js
branding:
icon: unlock
color: gray-dark
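Example usage (a sketch; the version tag, secret names, and follow-up command are placeholders):
steps:
  - name: Generate an installation token
    id: app-token
    uses: tibdex/github-app-token@v2   # placeholder version tag
    with:
      app_id: ${{ secrets.APP_ID }}
      private_key: ${{ secrets.APP_PRIVATE_KEY }}
  - name: Use the token with the GitHub CLI
    run: gh issue list --repo ${{ github.repository }}
    env:
      GITHUB_TOKEN: ${{ steps.app-token.outputs.token }}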
Action ID: marketplace/primer/comment-token-update
Author: Unknown
Publisher: primer
Repository: github.com/primer/comment-token-update
Replaces a token in an existing comment with provided input
| Name | Required | Description |
|---|---|---|
| comment-token | Required | Token to replace in comment text |
| script | Required | script to get output of |
name: 'Comment Token Update'
description: 'Replaces a token in an existing comment with provided input'
inputs:
comment-token:
description: 'Token to replace in comment text'
required: true
default: ''
script:
description: 'script to get output of'
required: true
default: ''
runs:
using: 'node12'
main: 'index.js'
Action ID: marketplace/samuelmeuli/action-electron-builder
Author: Samuel Meuli
Publisher: samuelmeuli
Repository: github.com/samuelmeuli/action-electron-builder
GitHub Action for building and releasing Electron apps
| Name | Required | Description |
|---|---|---|
| github_token | Required | GitHub authentication token |
| mac_certs | Optional | Base64-encoded code signing certificate for macOS |
| mac_certs_password | Optional | Password for decrypting `mac_certs` |
| release | Optional | Whether the app should be released after a successful build |
| windows_certs | Optional | Base64-encoded code signing certificate for Windows |
| windows_certs_password | Optional | Password for decrypting `windows_certs` |
| package_root | Optional | Directory where NPM/Yarn commands should be run Default: . |
| build_script_name | Optional | Name of the optional NPM build script which is executed before `electron-builder` Default: build |
| skip_build | Optional | Whether the action should execute the NPM build script before running `electron-builder` |
| use_vue_cli | Optional | Whether to run `electron-builder` using the Vue CLI plugin instead of calling the command directly |
| args | Optional | Other arguments to pass to the `electron-builder` command, e.g. configuration overrides |
| max_attempts | Optional | Maximum number of attempts for completing the build and release step Default: 1 |
| app_root | Optional | Directory where `electron-builder` commands should be run |
name: Electron Builder Action
author: Samuel Meuli
description: GitHub Action for building and releasing Electron apps
inputs:
github_token:
description: GitHub authentication token
required: true
mac_certs:
description: Base64-encoded code signing certificate for macOS
required: false
mac_certs_password:
description: Password for decrypting `mac_certs`
required: false
release:
description: Whether the app should be released after a successful build
required: false
default: false
windows_certs:
description: Base64-encoded code signing certificate for Windows
required: false
windows_certs_password:
description: Password for decrypting `windows_certs`
required: false
package_root:
description: Directory where NPM/Yarn commands should be run
required: false
default: "."
build_script_name:
description: Name of the optional NPM build script which is executed before `electron-builder`
required: false
default: build
skip_build:
description: Whether the action should execute the NPM build script before running `electron-builder`
required: false
default: false
use_vue_cli:
description: Whether to run `electron-builder` using the Vue CLI plugin instead of calling the command directly
required: false
default: false
args:
description: Other arguments to pass to the `electron-builder` command, e.g. configuration overrides
required: false
default: ""
max_attempts:
description: Maximum number of attempts for completing the build and release step
required: false
default: "1"
# Deprecated
app_root:
description: Directory where `electron-builder` commands should be run
required: false
runs:
using: node12
main: index.js
branding:
icon: upload-cloud
color: blue
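Example usage (a sketch; the version tags, secret names, and tag-based release condition are placeholders):
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: 20
  - name: Build and release Electron app
    uses: samuelmeuli/action-electron-builder@v1   # placeholder version tag
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      release: ${{ startsWith(github.ref, 'refs/tags/v') }}
      mac_certs: ${{ secrets.MAC_CERTS }}
      mac_certs_password: ${{ secrets.MAC_CERTS_PASSWORD }}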
Action ID: marketplace/raeperd/google-drive-download-action
Author: raeperd
Publisher: raeperd
Repository: github.com/raeperd/google-drive-download-action
GitHub action for downloading Google Drive files or folders using the Drives: list API
| Name | Required | Description |
|---|---|---|
| clientId | Required | Client id of oauth2 client application |
| clientSecret | Required | Client secret of oauth2 client application |
| redirectUri | Required | Redirect uri of oauth2 client application |
| credential_json | Required | credential.json with refresh_token and scope https://www.googleapis.com/auth/drive.readonly |
| q | Required | Query string to search for files and folders |
| path | Optional | Path to download files default to working directory Default: ./ |
name: 'google-drive-download-action'
description: 'Github action for downlaoding google drive files or folder using Drives: list API'
author: 'raeperd'
branding:
icon: download
color: white
inputs:
clientId:
required: true
description: 'Client id of oauth2 client application'
clientSecret:
required: true
description: 'Client secret of oauth2 client application'
redirectUri:
required: true
description: 'Redirect uri of oauth2 client application'
credential_json:
required: true
description: 'credential.json with refresh_token and scope https://www.googleapis.com/auth/drive.readonly'
q:
required: true
description: 'Query string to search for files and folders'
path:
required: false
default: "./"
description: 'Path to download files default to working directory'
runs:
using: 'node16'
main: 'dist/index.js'
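Example usage (a sketch; the version tag, secret names, query string, and download path are placeholders and assumptions):
steps:
  - name: Download files from Google Drive
    uses: raeperd/google-drive-download-action@v1   # placeholder version tag
    with:
      clientId: ${{ secrets.GDRIVE_CLIENT_ID }}
      clientSecret: ${{ secrets.GDRIVE_CLIENT_SECRET }}
      redirectUri: ${{ secrets.GDRIVE_REDIRECT_URI }}
      credential_json: ${{ secrets.GDRIVE_CREDENTIAL_JSON }}
      q: "name contains 'report'"   # example Drive query string
      path: ./downloads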
Action ID: marketplace/mxschmitt/playwright-github-action
Author: Microsoft
Publisher: mxschmitt
Repository: github.com/mxschmitt/playwright-github-action
Run Playwright tests on GitHub Actions
name: 'Run Playwright tests'
description: 'Run Playwright tests on GitHub Actions'
author: 'Microsoft'
branding:
icon: 'play'
color: 'green'
runs:
using: 'node12'
main: 'dist/index.js'
Action ID: marketplace/azure/pipelines
Author: Unknown
Publisher: azure
Repository: github.com/azure/pipelines
Trigger a run in Azure pipelines
| Name | Required | Description |
|---|---|---|
| azure-devops-project-url | Required | Fully qualified URL to the Azure DevOps organization along with project name (eg, https://dev.azure.com/organization/project-name or https://server.example.com:8080/tfs/DefaultCollection/project-name) |
| azure-pipeline-name | Required | Name of the Azure Pipeline to be triggered |
| azure-pipeline-variables | Optional | Set/Overwrite pipeline variables |
| azure-devops-token | Required | Paste personal access token of the user as value of secret variable:AZURE_DEVOPS_TOKEN |
name: 'Azure Pipelines Action'
description: 'Trigger a run in Azure pipelines'
inputs:
azure-devops-project-url:
description: 'Fullyqualified URL to the Azure DevOps organization along with project name(eg, https://dev.azure.com/organization/project-name or https://server.example.com:8080/tfs/DefaultCollection/project-name)'
required: true
azure-pipeline-name:
description: 'Name of the Azure Pipline to be triggered'
required: true
azure-pipeline-variables:
description: 'Set/Overwrite pipeline variables'
required: false
azure-devops-token:
description: 'Paste personal access token of the user as value of secret variable:AZURE_DEVOPS_TOKEN'
required: true
runs:
using: 'node16'
main: 'dist/index.js'
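Example usage (a sketch; the version tag, project URL, pipeline name, and the JSON variable payload format are placeholders and assumptions):
steps:
  - name: Trigger an Azure Pipelines run
    uses: azure/pipelines@v1   # placeholder version tag
    with:
      azure-devops-project-url: https://dev.azure.com/myorg/my-project
      azure-pipeline-name: 'my-build-pipeline'
      azure-devops-token: ${{ secrets.AZURE_DEVOPS_TOKEN }}
      azure-pipeline-variables: '{"Environment": "staging"}'   # example payload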
Action ID: marketplace/appleboy/docker-ecr-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/docker-ecr-action
Upload Docker Image to Amazon Elastic Container Registry (ECR)
| Name | Required | Description |
|---|---|---|
| access_key | Optional | amazon access key |
| secret_key | Optional | amazon secret access key |
| registry | Optional | docker registry |
| region | Optional | amazon region, defaults to us-east-1 Default: us-east-1 |
| repo | Optional | repository name for the image |
| lifecycle_policy | Optional | filename of ecr lifecycle json policy |
| repository_policy | Optional | filename of ecr repository json policy |
| tags | Optional | repository tag for the image, defaults to latest Default: latest |
| dockerfile | Optional | dockerfile to be used, defaults to Dockerfile Default: Dockerfile |
| auth | Optional | auth token for the registry |
| context | Optional | the context path to use, defaults to root of the git repo Default: . |
| force_tag | Optional | replace existing matched image tags |
| insecure | Optional | enable insecure communication to this registry |
| mirror | Optional | use a mirror registry instead of pulling images directly from the central Hub |
| bip | Optional | use for pass bridge ip |
| custom_dns | Optional | set custom dns servers for the container |
| storage_driver | Optional | supports aufs, overlay or vfs drivers |
| cache_from | Optional | images to consider as cache sources |
| auto_tag | Optional | default build tags |
| daemon_off | Optional | do not start the docker daemon |
name: 'Docker ECR'
description: 'Upload Docker Image to Amazon Elastic Container Registry (ECR)'
author: 'Bo-Yi Wu'
inputs:
access_key:
description: 'amazon access key'
secret_key:
description: 'amazon secret access key'
registry:
description: 'docker registry'
region:
description: 'amazon region, defaults to us-east-1'
default: 'us-east-1'
repo:
description: 'repository name for the image'
lifecycle_policy:
description: 'filename of ecr lifecycle json policy'
repository_policy:
description: 'filename of ecr repository json policy'
tags:
description: 'repository tag for the image, defaults to latest'
default: 'latest'
dockerfile:
description: 'dockerfile to be used, defaults to Dockerfile'
default: 'Dockerfile'
auth:
description: 'auth token for the registry'
context:
description: 'the context path to use, defaults to root of the git repo'
default: '.'
force_tag:
description: 'replace existing matched image tags'
insecure:
description: 'enable insecure communication to this registry'
mirror:
description: 'use a mirror registry instead of pulling images directly from the central Hub'
bip:
description: 'use for pass bridge ip'
custom_dns:
description: 'set custom dns servers for the container'
storage_driver:
description: 'supports aufs, overlay or vfs drivers'
cache_from:
description: 'images to consider as cache sources'
auto_tag:
description: 'default build tags'
daemon_off:
description: 'do not start the docker daemon'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'cloud'
color: 'orange'
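Example usage (a sketch; the ref, secret names, registry host, repository name, and the comma-separated tag format are placeholders and assumptions):
steps:
  - uses: actions/checkout@v4
  - name: Build and push image to ECR
    uses: appleboy/docker-ecr-action@master   # placeholder ref
    with:
      access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
      secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      registry: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com
      region: us-east-1
      repo: my-app
      tags: latest,${{ github.sha }}   # assumed comma-separated tag list
      dockerfile: Dockerfile
      context: .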
Action ID: marketplace/szenius/notion-burndown
Author: Unknown
Publisher: szenius
Repository: github.com/szenius/notion-burndown
Generates Burndown Chart based on Scrum backlog in Notion database
| Name | Required | Description |
|---|---|---|
| NOTION_KEY | Required | Notion integration access token |
| NOTION_DB_BACKLOG | Required | Notion Database ID of Sprint Backlog |
| NOTION_DB_SPRINT_SUMMARY | Required | Notion Database ID of Sprint Summary |
| NOTION_DB_DAILY_SUMMARY | Required | Notion Database ID of Daily Summary |
| NOTION_PROPERTY_SPRINT | Required | Name of the property with the sprint number |
| NOTION_PROPERTY_ESTIMATE | Required | Name of the property with the estimate |
| NOTION_PROPERTY_PATTERN_STATUS_EXCLUDE | Required | Regex of the statuses of stories which are done |
| INCLUDE_WEEKENDS | Optional | True if weekends should be included in the chart, false otherwise. Default: true |
| SPRINT_START | Optional | True if it is the start of a new sprint, false otherwise. When true, a new entry will be created in the Sprint Summary database, and the sprint end date will be 14 days later (sprint start day excluded). Note that new sprint summary will not be created if today is still in the middle of the last sprint. Default: false |
name: "Notion Burndown Chart"
description: "Generates Burndown Chart based on Scrum backlog in Notion database"
branding:
icon: bar-chart-2
color: green
inputs:
NOTION_KEY:
description: "Notion integration access token"
required: true
NOTION_DB_BACKLOG:
description: "Notion Database ID of Sprint Backlog"
required: true
NOTION_DB_SPRINT_SUMMARY:
description: "Notion Database ID of Sprint Summary"
required: true
NOTION_DB_DAILY_SUMMARY:
description: "Notion Database ID of Daily Summary"
required: true
NOTION_PROPERTY_SPRINT:
description: "Name of the property with the sprint number"
required: true
NOTION_PROPERTY_ESTIMATE:
description: "Name of the property with the estimate"
required: true
NOTION_PROPERTY_PATTERN_STATUS_EXCLUDE:
description: "Regex of the statuses of stories which are done"
required: true
INCLUDE_WEEKENDS:
description: "True if weekends should be included in the chart, false otherwise."
required: false
default: "true"
SPRINT_START:
description: "True if it is the start of a new sprint, false otherwise. When true, a new entry will be created in the Sprint Summary database, and the sprint end date will be 14 days later (sprint start day excluded). Note that new sprint summary will not be created if today is still in the middle of the last sprint."
required: false
default: "false"
runs:
using: "node16"
main: "dist/index.js"
Action ID: marketplace/azure/aml-compute
Author: azure/gh-aml
Publisher: azure
Repository: github.com/azure/aml-compute
Connect to or create a Compute Target to your Azure Machine Learning Workspace with this GitHub Action
| Name | Required | Description |
|---|---|---|
| azure_credentials | Required | Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS |
| parameters_file | Required | JSON file including the parameters of the compute. Default: compute.json |
name: "Azure Machine Learning Compute Action"
description: "Connect to or create a Compute Target to your Azure Machine Learning Workspace with this GitHub Action"
author: "azure/gh-aml"
inputs:
azure_credentials:
description: "Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS"
required: true
parameters_file:
description: "JSON file including the parameters of the compute."
required: true
default: "compute.json"
branding:
icon: "chevron-up"
color: "blue"
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/appleboy/gosec
Author: @ccojocar
Publisher: appleboy
Repository: github.com/appleboy/gosec
Runs the gosec security checker
| Name | Required | Description |
|---|---|---|
| args | Required | Arguments for gosec Default: -h |
name: "Gosec Security Checker"
description: "Runs the gosec security checker"
author: "@ccojocar"
inputs:
args:
description: "Arguments for gosec"
required: true
default: "-h"
runs:
using: "docker"
image: "docker://securego/gosec:2.22.10"
args:
- ${{ inputs.args }}
branding:
icon: "shield"
color: "blue"
Action ID: marketplace/stefanzweifel/phpinsights-action
Author: Stefan Zweifel <hello@stefanzweifel.io>
Publisher: stefanzweifel
Repository: github.com/stefanzweifel/phpinsights-action
Run PHP Insights
name: PHP Insights Action
description: Run PHP Insights
author: Stefan Zweifel <hello@stefanzweifel.io>
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: bar-chart-2
color: blue
Action ID: marketplace/lukka/get-cmake
Author: Luca Cappa https://github.com/lukka
Publisher: lukka
Repository: github.com/lukka/get-cmake
Installs CMake and Ninja, and caches them on cloud based GitHub cache, and/or on the local GitHub runner cache.
| Name | Required | Description |
|---|---|---|
| cmakeVersion | Optional | Optional CMake version, expressed with the semantic version syntax, e.g. '~3.25.0' for the most recent 3.25.x, `ˆ3.25.0` for the most recent 3.x version, or a specific version `3.25.2'. Or `latest` or `latestrc` for the latest stable or release candidate version. If not specified the `latest` is installed. |
| ninjaVersion | Optional | Optional Ninja version, same syntax as `cmakeVersion` input. If not specified, `latest` is installed |
| useCloudCache | Optional | Optional argument indicating whether to use the cloud based storage of the GitHub cache. Suited for the GitHub-hosted runners. Default: True |
| useLocalCache | Optional | Optional argument indicating whether to use the local cache on the GitHub runner file system. Suited for the self-hosted GitHub runners. |

| Name | Description |
|---|---|
| cmake-path | The path to the installed CMake executable |
| ninja-path | The path to the installed Ninja executable |
# Copyright (c) 2020, 2021, 2022, 2023-2025 Luca Cappa
# Released under the term specified in file LICENSE.txt
# SPDX short identifier: MIT
name: 'get-cmake'
description: 'Installs CMake and Ninja, and caches them on cloud based GitHub cache, and/or on the local GitHub runner cache.'
author: 'Luca Cappa https://github.com/lukka'
runs:
using: 'node20'
main: 'dist/index.js'
inputs:
cmakeVersion:
required: false
description: "Optional CMake version, expressed with the semantic version syntax, e.g. '~3.25.0' for the most recent 3.25.x, `ˆ3.25.0` for the most recent 3.x version, or a specific version `3.25.2'. Or `latest` or `latestrc` for the latest stable or release candidate version. If not specified the `latest` is installed."
ninjaVersion:
required: false
description: "Optional Ninja version, same syntax as `cmakeVersion` input. If not specified, `latest` is installed"
useCloudCache:
required: false
description: "Optional argument indicating whether to use the cloud based storage of the GitHub cache. Suited for the GitHub-hosted runners."
default: true
useLocalCache:
required: false
description: "Optional argument indicating whether to use the local cache on the GitHub runner file system. Suited for the self-hosted GitHub runners."
default: false
outputs:
cmake-path:
description: 'The path to the installed CMake executable'
ninja-path:
description: 'The path to the installed Ninja executable'
branding:
icon: 'terminal'
color: 'green'
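Example usage (a sketch; the ref is a placeholder, and the version specifiers are examples taken from the syntax described above):
steps:
  - name: Install CMake and Ninja
    uses: lukka/get-cmake@latest   # placeholder ref
    with:
      cmakeVersion: '~3.25.0'   # example: newest 3.25.x
      ninjaVersion: latest
  - run: cmake --version && ninja --version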
Action ID: marketplace/pypa/cibuildwheel
Author: Unknown
Publisher: pypa
Repository: github.com/pypa/cibuildwheel
Installs and runs cibuildwheel on the current runner
| Name | Required | Description |
|---|---|---|
| package-dir | Optional | Input directory, defaults to "." Default: . |
| output-dir | Optional | Folder to place the outputs in, defaults to "wheelhouse" Default: wheelhouse |
| config-file | Optional | File containing the config, defaults to {package}/pyproject.toml |
| only | Optional | Build a specific wheel only. No need for arch/platform if this is set |
| extras | Optional | Comma-separated list of extras to install |
name: cibuildwheel
description: 'Installs and runs cibuildwheel on the current runner'
inputs:
package-dir:
description: 'Input directory, defaults to "."'
required: false
default: .
output-dir:
description: 'Folder to place the outputs in, defaults to "wheelhouse"'
required: false
default: wheelhouse
config-file:
description: 'File containing the config, defaults to {package}/pyproject.toml'
required: false
default: ''
only:
description: 'Build a specific wheel only. No need for arch/platform if this is set'
required: false
default: ''
extras:
description: 'Comma-separated list of extras to install'
required: false
default: ''
branding:
icon: package
color: yellow
runs:
using: composite
steps:
- uses: actions/setup-python@v6
id: python
with:
python-version: "3.11 - 3.13"
update-environment: false
- id: cibw
run: |
# Install cibuildwheel
"${{ steps.python.outputs.python-path }}" -u << "EOF"
import os
import shutil
import sys
import venv
from pathlib import Path
from subprocess import run
EXTRAS = set(e.strip() for e in "${{ inputs.extras }}".split(",") if e.strip())
if sys.platform == "linux":
EXTRAS.discard("uv")
class EnvBuilder(venv.EnvBuilder):
def __init__(self):
super().__init__()
def setup_scripts(self, context):
pass
def post_setup(self, context):
super().post_setup(context)
self.bin_path = Path(context.env_exe).parent
install_spec = r"${{ github.action_path }}"
if EXTRAS:
install_spec += f"[{','.join(sorted(EXTRAS))}]"
run([sys.executable, "-m", "pip", "--python", context.env_exe, "install", install_spec], check=True)
print("::group::Install cibuildwheel")
venv_path = Path(r"${{ runner.temp }}") / "cibw"
if venv_path.exists():
shutil.rmtree(venv_path)
builder = EnvBuilder()
builder.create(venv_path)
exposed_binaries = {"cibuildwheel"}
if "uv" in EXTRAS:
exposed_binaries.add("uv")
clean_bin_path = builder.bin_path.parent / f"{builder.bin_path.name}.clean"
clean_bin_path.mkdir()
for path in list(builder.bin_path.iterdir()):
if path.stem in exposed_binaries:
try:
os.symlink(path, clean_bin_path / path.name)
except OSError:
import shutil
shutil.copy2(path, clean_bin_path / path.name)
full_path = f"{clean_bin_path}{os.pathsep}{os.environ['PATH']}"
with open(os.environ["GITHUB_OUTPUT"], "at") as f:
f.write(f"updated-path={full_path}\n")
print("::endgroup::")
EOF
shell: bash
# Redirecting stderr to stdout to fix interleaving issue in Actions.
- run: >
cibuildwheel
"${{ inputs.package-dir }}"
${{ inputs.output-dir != '' && format('--output-dir "{0}"', inputs.output-dir) || ''}}
${{ inputs.config-file != '' && format('--config-file "{0}"', inputs.config-file) || ''}}
${{ inputs.only != '' && format('--only "{0}"', inputs.only) || ''}}
2>&1
env:
PATH: "${{ steps.cibw.outputs.updated-path }}"
shell: bash
if: runner.os != 'Windows'
# Windows needs powershell to interact nicely with Meson
- run: >
cibuildwheel
"${{ inputs.package-dir }}"
${{ inputs.output-dir != '' && format('--output-dir "{0}"', inputs.output-dir) || ''}}
${{ inputs.config-file != '' && format('--config-file "{0}"', inputs.config-file) || ''}}
${{ inputs.only != '' && format('--only "{0}"', inputs.only) || ''}}
env:
PATH: "${{ steps.cibw.outputs.updated-path }}"
shell: pwsh
if: runner.os == 'Windows'
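Example usage (a minimal sketch; the `@v2` ref and the artifact-upload step are assumptions shown for context):

```yaml
# Build wheels in place and collect them as a workflow artifact.
- uses: actions/checkout@v4
- name: Build wheels
  uses: pypa/cibuildwheel@v2
  with:
    package-dir: .
    output-dir: wheelhouse
- uses: actions/upload-artifact@v4
  with:
    name: wheels
    path: wheelhouse/*.whl
```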
Action ID: marketplace/sobolevn/trivy-action
Author: Aqua Security
Publisher: sobolevn
Repository: github.com/sobolevn/trivy-action
Scans container images for vulnerabilities with Trivy
| Name | Required | Description |
|---|---|---|
scan-type |
Optional | Scan type to use for scanning vulnerability Default: image |
image-ref |
Optional | image reference(for backward compatibility) |
input |
Optional | reference of tar file to scan |
scan-ref |
Optional | Scan reference Default: . |
exit-code |
Optional | exit code when vulnerabilities were found |
ignore-unfixed |
Optional | ignore unfixed vulnerabilities Default: false |
vuln-type |
Optional | comma-separated list of vulnerability types (os,library) Default: os,library |
severity |
Optional | severities of vulnerabilities to be displayed Default: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL |
format |
Optional | output format (table, json, template) Default: table |
template |
Optional | use an existing template for rendering output (@/contrib/gitlab.tpl, @/contrib/junit.tpl, @/contrib/html.tpl) |
output |
Optional | writes results to a file with the specified file name |
skip-dirs |
Optional | comma separated list of directories where traversal is skipped |
skip-files |
Optional | comma separated list of files to be skipped |
cache-dir |
Optional | specify where the cache is stored |
timeout |
Optional | timeout (default 5m0s) |
ignore-policy |
Optional | filter vulnerabilities with OPA rego language |
hide-progress |
Optional | suppress progress bar and log output |
list-all-pkgs |
Optional | output all packages regardless of vulnerability Default: false |
scanners |
Optional | comma-separated list of what security issues to detect |
trivyignores |
Optional | comma-separated list of relative paths in repository to one or more .trivyignore files |
artifact-type |
Optional | input artifact type (image, fs, repo, archive) for SBOM generation |
github-pat |
Optional | GitHub Personal Access Token (PAT) for submitting SBOM to GitHub Dependency Snapshot API |
trivy-config |
Optional | path to trivy.yaml config |
tf-vars |
Optional | path to terraform tfvars file |
limit-severities-for-sarif |
Optional | limit severities for SARIF format |
docker-host |
Optional | unix domain socket path to use for docker scanning, ex. unix:///var/run/docker.sock |
name: 'Aqua Security Trivy'
description: 'Scans container images for vulnerabilities with Trivy'
author: 'Aqua Security'
inputs:
scan-type:
description: 'Scan type to use for scanning vulnerability'
required: false
default: 'image'
image-ref:
description: 'image reference(for backward compatibility)'
required: false
input:
description: 'reference of tar file to scan'
required: false
default: ''
scan-ref:
description: 'Scan reference'
required: false
default: '.'
exit-code:
description: 'exit code when vulnerabilities were found'
required: false
ignore-unfixed:
description: 'ignore unfixed vulnerabilities'
required: false
default: 'false'
vuln-type:
description: 'comma-separated list of vulnerability types (os,library)'
required: false
default: 'os,library'
severity:
description: 'severities of vulnerabilities to be displayed'
required: false
default: 'UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL'
format:
description: 'output format (table, json, template)'
required: false
default: 'table'
template:
description: 'use an existing template for rendering output (@/contrib/gitlab.tpl, @/contrib/junit.tpl, @/contrib/html.tpl)'
required: false
default: ''
output:
description: 'writes results to a file with the specified file name'
required: false
default: ''
skip-dirs:
description: 'comma separated list of directories where traversal is skipped'
required: false
default: ''
skip-files:
description: 'comma separated list of files to be skipped'
required: false
default: ''
cache-dir:
description: 'specify where the cache is stored'
required: false
default: ''
timeout:
description: 'timeout (default 5m0s)'
required: false
default: ''
ignore-policy:
description: 'filter vulnerabilities with OPA rego language'
required: false
default: ''
hide-progress:
description: 'suppress progress bar and log output'
required: false
list-all-pkgs:
description: 'output all packages regardless of vulnerability'
required: false
default: 'false'
scanners:
description: 'comma-separated list of what security issues to detect'
required: false
default: ''
trivyignores:
description: 'comma-separated list of relative paths in repository to one or more .trivyignore files'
required: false
default: ''
artifact-type:
description: 'input artifact type (image, fs, repo, archive) for SBOM generation'
required: false
github-pat:
description: 'GitHub Personal Access Token (PAT) for submitting SBOM to GitHub Dependency Snapshot API'
required: false
trivy-config:
description: 'path to trivy.yaml config'
required: false
tf-vars:
description: "path to terraform tfvars file"
required: false
limit-severities-for-sarif:
description: 'limit severities for SARIF format'
required: false
docker-host:
description: 'unix domain socket path to use for docker scanning, ex. unix:///var/run/docker.sock'
required: false
runs:
using: 'docker'
image: "Dockerfile"
args:
- '-a ${{ inputs.scan-type }}'
- '-b ${{ inputs.format }}'
- '-c ${{ inputs.template }}'
- '-d ${{ inputs.exit-code }}'
- '-e ${{ inputs.ignore-unfixed }}'
- '-f ${{ inputs.vuln-type }}'
- '-g ${{ inputs.severity }}'
- '-h ${{ inputs.output }}'
- '-i ${{ inputs.image-ref }}'
- '-j ${{ inputs.scan-ref }}'
- '-k ${{ inputs.skip-dirs }}'
- '-l ${{ inputs.input }}'
- '-m ${{ inputs.cache-dir }}'
- '-n ${{ inputs.timeout }}'
- '-o ${{ inputs.ignore-policy }}'
- '-p ${{ inputs.hide-progress }}'
- '-q ${{ inputs.skip-files }}'
- '-r ${{ inputs.list-all-pkgs }}'
- '-s ${{ inputs.scanners }}'
- '-t ${{ inputs.trivyignores }}'
- '-u ${{ inputs.github-pat }}'
- '-v ${{ inputs.trivy-config }}'
- '-x ${{ inputs.tf-vars }}'
- '-z ${{ inputs.limit-severities-for-sarif }}'
- '-y ${{ inputs.docker-host }}'
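Example usage (illustrative only; the `@master` ref and the image name are assumptions, and the fork is expected to behave like the upstream Trivy action):

```yaml
# Scan a previously built image and fail the job on HIGH/CRITICAL findings.
- name: Run Trivy vulnerability scanner
  uses: sobolevn/trivy-action@master
  with:
    image-ref: 'myregistry.io/myapp:latest'   # illustrative image
    format: 'table'
    exit-code: '1'
    ignore-unfixed: true
    severity: 'CRITICAL,HIGH'
```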
Action ID: internal/contoso/deploy-kubernetes
Author: Platform Engineering Team
Publisher: internal
Deploy containerized applications to Kubernetes clusters with built-in health checks and rollback support
| Name | Required | Description |
|---|---|---|
environment |
Required | Target environment (dev, staging, prod) |
image-tag |
Required | Docker image tag to deploy |
namespace |
Optional | Kubernetes namespace Default: default |
replicas |
Optional | Number of pod replicas Default: 3 |
health-check-timeout |
Optional | Health check timeout in seconds Default: 300 |
rollback-on-failure |
Optional | Automatically rollback on deployment failure (true/false) Default: true |
| Name | Description |
|---|---|
deployment-url |
URL of the deployed application |
deployment-version |
Deployed version identifier |
rollback-revision |
Previous revision number for rollback |
name: "Deploy to Kubernetes"
description: "Deploy containerized applications to Kubernetes clusters with built-in health checks and rollback support"
author: "Platform Engineering Team"
inputs:
environment:
description: "Target environment (dev, staging, prod)"
required: true
image-tag:
description: "Docker image tag to deploy"
required: true
namespace:
description: "Kubernetes namespace"
required: false
default: "default"
replicas:
description: "Number of pod replicas"
required: false
default: "3"
health-check-timeout:
description: "Health check timeout in seconds"
required: false
default: "300"
rollback-on-failure:
description: "Automatically rollback on deployment failure (true/false)"
required: false
default: "true"
outputs:
deployment-url:
description: "URL of the deployed application"
deployment-version:
description: "Deployed version identifier"
rollback-revision:
description: "Previous revision number for rollback"
runs:
using: "composite"
steps:
- name: Validate environment
shell: bash
run: |
if [[ ! "${{ inputs.environment }}" =~ ^(dev|staging|prod)$ ]]; then
echo "❌ Invalid environment: ${{ inputs.environment }}"
echo "Must be one of: dev, staging, prod"
exit 1
fi
echo "✅ Environment validated: ${{ inputs.environment }}"
- name: Setup kubectl
shell: bash
run: |
echo "🔧 Setting up kubectl..."
kubectl version --client
- name: Configure cluster access
shell: bash
run: |
echo "🔐 Configuring cluster access for ${{ inputs.environment }}..."
# Your cluster configuration logic here
# Example: aws eks update-kubeconfig --name my-cluster-${{ inputs.environment }}
- name: Deploy to Kubernetes
shell: bash
run: |
echo "🚀 Deploying image tag: ${{ inputs.image-tag }}"
echo "📦 Namespace: ${{ inputs.namespace }}"
echo "🔢 Replicas: ${{ inputs.replicas }}"
kubectl set image deployment/myapp \
myapp=myregistry.io/myapp:${{ inputs.image-tag }} \
-n ${{ inputs.namespace }}
kubectl scale deployment/myapp \
--replicas=${{ inputs.replicas }} \
-n ${{ inputs.namespace }}
- name: Wait for rollout
shell: bash
run: |
echo "⏳ Waiting for deployment to complete..."
kubectl rollout status deployment/myapp \
-n ${{ inputs.namespace }} \
--timeout=${{ inputs.health-check-timeout }}s
- name: Health check
shell: bash
run: |
echo "🏥 Running health checks..."
PODS=$(kubectl get pods -n ${{ inputs.namespace }} -l app=myapp -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
echo "Checking pod: $POD"
kubectl wait --for=condition=ready pod/$POD \
-n ${{ inputs.namespace }} \
--timeout=60s
done
echo "✅ All pods are healthy"
- name: Get deployment info
shell: bash
run: |
echo "📊 Deployment Information:"
kubectl get deployment myapp -n ${{ inputs.namespace }}
# Set outputs
echo "deployment-url=https://myapp-${{ inputs.environment }}.example.com" >> $GITHUB_OUTPUT
echo "deployment-version=${{ inputs.image-tag }}" >> $GITHUB_OUTPUT
REVISION=$(kubectl rollout history deployment/myapp -n ${{ inputs.namespace }} | tail -n 2 | head -n 1 | awk '{print $1}')
echo "rollback-revision=$REVISION" >> $GITHUB_OUTPUT
- name: Rollback on failure
if: failure() && inputs.rollback-on-failure == 'true'
shell: bash
run: |
echo "❌ Deployment failed! Rolling back..."
kubectl rollout undo deployment/myapp -n ${{ inputs.namespace }}
kubectl rollout status deployment/myapp -n ${{ inputs.namespace }}
echo "🔄 Rollback completed"
exit 1
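Example usage (a hypothetical invocation; because this is an internal action with no public repository listed, the `uses:` reference below is assumed and would normally point at the internal repo path or a local `./.github/actions/...` checkout):

```yaml
# Hypothetical step; the contoso/deploy-kubernetes@v1 reference is assumed.
- name: Deploy to Kubernetes
  uses: contoso/deploy-kubernetes@v1
  with:
    environment: staging
    image-tag: ${{ github.sha }}
    namespace: web
    replicas: '3'
    rollback-on-failure: 'true'
```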
Action ID: marketplace/appleboy/codegpt-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/codegpt-action
Github Action for CodeGPT
| Name | Required | Description |
|---|---|---|
openai_api_key |
Required | OpenAI API Key |
openai_base_url |
Optional | OpenAI Base URL |
openai_org_id |
Optional | OpenAI ORG ID |
openai_model |
Optional | OpenAI Model Default: gpt-3.5-turbo |
| Name | Description |
|---|---|
review |
Code review |
name: 'CodeGPT'
description: 'Github Action for CodeGPT'
author: 'Bo-Yi Wu'
inputs:
openai_api_key: # Open AI API Key
description: 'OpenAI API Key'
required: true
default: ''
openai_base_url: # Open AI Base URL
description: 'OpenAI Base URL'
required: false
openai_org_id: # Open AI ORG ID
description: 'OpenAI ORG ID'
required: false
default: ''
openai_model: # Open AI Model
description: 'OpenAI Model'
required: false
default: 'gpt-3.5-turbo'
outputs:
review: # Code review
description: 'Code review'
runs:
using: 'docker'
image: 'Dockerfile'
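Example usage (a minimal sketch; the `@v1` ref is an assumption and the API key is expected to come from a repository secret):

```yaml
# Run a CodeGPT review; only the inputs declared above are passed.
- name: CodeGPT review
  uses: appleboy/codegpt-action@v1   # assumed tag
  with:
    openai_api_key: ${{ secrets.OPENAI_API_KEY }}
    openai_model: 'gpt-3.5-turbo'
```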
Action ID: marketplace/actions/container-action
Author: Your name or organization here
Publisher: actions
Repository: github.com/actions/container-action
Provide a description here
| Name | Required | Description |
|---|---|---|
who-to-greet |
Required | Your input description here Default: World |
| Name | Description |
|---|---|
greeting |
Your output description here |
name: The name of your action here
description: Provide a description here
author: Your name or organization here
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: heart
color: red
# Define your inputs here.
inputs:
who-to-greet:
description: Your input description here
required: true
default: World
# Define your outputs here.
outputs:
greeting:
description: Your output description here
runs:
using: docker
image: Dockerfile
env:
INPUT_WHO_TO_GREET: ${{ inputs.who-to-greet }}
Action ID: marketplace/amirisback/android-edit-text-spinner
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-edit-text-spinner
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/stefanzweifel/changelog-updater-action
Author: Stefan Zweifel <stefan@stefanzweifel.dev>
Publisher: stefanzweifel
Repository: github.com/stefanzweifel/changelog-updater-action
Automatically update a CHANGELOG with the latest release notes.
| Name | Required | Description |
|---|---|---|
release-notes |
Optional | The release notes you want to add to your CHANGELOG. Should be markdown. If not provided, the content between the Unreleased heading and the previous release heading is taken as release notes. |
latest-version |
Required | Semantic version number of the latest release. The value will be used as a heading text. |
release-date |
Optional | The date the latest version has been released. Defaults to the current date. |
path-to-changelog |
Optional | Defaults to `CHANGELOG.md`. Path to CHANGELOG.md file. Default: CHANGELOG.md |
compare-url-target-revision |
Optional | Target revision used in compare URL inside a possible "Unreleased" heading. Default: HEAD |
heading-text |
Optional | Text used in the new release heading. Defaults to the value from latest-version. |
hide-release-date |
Optional | Hide release date in the new release heading. |
parse-github-usernames |
Optional | Experimental: Find GitHub usernames in release notes and link to their profile. |
| Name | Description |
|---|---|
release_compare_url |
The generated compare URL for the just created release. For example https://github.com/org/repo/compare/v1.0.0...v1.1.0 |
release_url_fragment |
The URL fragment for the just created release. For example '#v100---2021-02-01'. You can use this to generate URLs that point to the newly created release in your CHANGELOG. |
unreleased_compare_url |
The generated compare URL between the latest version and the target revision. For example https://github.com/org/repo/compare/v1.0.0...HEAD |
name: 'Changelog Updater'
description: 'Automatically update a CHANGELOG with the latest release notes.'
author: Stefan Zweifel <stefan@stefanzweifel.dev>
inputs:
release-notes:
required: false
description: The release notes you want to add to your CHANGELOG. Should be markdown. If not provided, the content between the Unreleased heading and the previous release heading is taken as release notes.
default: null
latest-version:
required: true
description: Semantic version number of the latest release. The value will be used as a heading text.
release-date:
required: false
description: The date the latest version has been released. Defaults to the current date.
default: null
path-to-changelog:
required: false
default: CHANGELOG.md
description: Defaults to `CHANGELOG.md`. Path to CHANGELOG.md file.
compare-url-target-revision:
required: false
default: HEAD
description: Target revision used in compare URL inside a possible "Unreleased" heading.
heading-text:
required: false
default: null
description: Text used in the new release heading. Defaults to the value from latest-version.
hide-release-date:
required: false
default: null
description: Hide release date in the new release heading.
parse-github-usernames:
required: false
default: null
description: "Experimental: Find GitHub usernames in release notes and link to their profile."
outputs:
release_compare_url:
description: The generated compare URL for the just created release. For example https://github.com/org/repo/compare/v1.0.0...v1.1.0
release_url_fragment:
description: The URL fragment for the just created release. For example '#v100---2021-02-01'. You can use this to generate URLs that point to the newly created release in your CHANGELOG.
unreleased_compare_url:
description: The generated compare URL between the latest version and the target revision. For example https://github.com/org/repo/compare/v1.0.0...HEAD
branding:
icon: copy
color: purple
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.release-notes }}
- ${{ inputs.latest-version }}
- ${{ inputs.release-date }}
- ${{ inputs.path-to-changelog }}
- ${{ inputs.compare-url-target-revision }}
- ${{ inputs.heading-text }}
- ${{ inputs.hide-release-date }}
- ${{ inputs.parse-github-usernames }}
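Example usage (a sketch of the typical release-triggered pattern; the `@v1` ref and the follow-up commit step are assumptions, and the updated CHANGELOG still has to be committed back, e.g. with stefanzweifel/git-auto-commit-action):

```yaml
on:
  release:
    types: [released]
jobs:
  update-changelog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main
      - name: Update Changelog
        uses: stefanzweifel/changelog-updater-action@v1   # assumed tag
        with:
          latest-version: ${{ github.event.release.tag_name }}
          release-notes: ${{ github.event.release.body }}
```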
Action ID: marketplace/amirisback/consumable-code-the-sport-db-api
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/consumable-code-the-sport-db-api
Retrofit has been Handled, Consumable code for request Public API (TheSportDbApi)
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Consumable Code The Sport DB API'
description: 'Retrofit has been Handled, Consumable code for request Public API (TheSportDbApi)'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/kick-start-android-webview
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/kick-start-android-webview
Base Webview Project with Admob
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Base-Webview-Project'
description: 'Base Webview Project with Admob'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/peter-evans/close-fork-pulls
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/close-fork-pulls
A GitHub action to close pull requests from forks
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub auth token Default: ${{ github.token }} |
repository |
Optional | The GitHub repository containing the pull request Default: ${{ github.repository }} |
comment |
Optional | A comment to make on the pull requests before closing |
name: 'Close Fork Pulls'
description: 'A GitHub action to close pull requests from forks'
inputs:
token:
description: 'GitHub auth token'
default: ${{ github.token }}
repository:
description: 'The GitHub repository containing the pull request'
default: ${{ github.repository }}
comment:
description: 'A comment to make on the pull requests before closing'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'slash'
color: 'gray-dark'
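Example usage (illustrative; the `pull_request_target` trigger and `@v3` ref are assumptions, the comment text is arbitrary):

```yaml
on: pull_request_target
jobs:
  close:
    runs-on: ubuntu-latest
    steps:
      - uses: peter-evans/close-fork-pulls@v3   # assumed tag
        with:
          comment: |
            Thank you for your contribution, but this repository does not
            accept pull requests from forks.
```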
Action ID: marketplace/actions/create-release
Author: GitHub
Publisher: actions
Repository: github.com/actions/create-release
Create a release for a tag in your repository
| Name | Required | Description |
|---|---|---|
tag_name |
Required | The name of the tag. This should come from the webhook payload, `github.GITHUB_REF` when a user pushes a new tag |
release_name |
Required | The name of the release. For example, `Release v1.0.1` |
body |
Optional | Text describing the contents of the tag. |
body_path |
Optional | Path to file with information about the tag. |
draft |
Optional | `true` to create a draft (unpublished) release, `false` to create a published one. Default: `false` |
prerelease |
Optional | `true` to identify the release as a prerelease. `false` to identify the release as a full release. Default: `false` |
commitish |
Optional | Any branch or commit SHA the Git tag is created from, unused if the Git tag already exists. Default: SHA of current commit |
owner |
Optional | Owner of the repository if it is not the current one |
repo |
Optional | Repository on which to release. Used only if you want to create the release on another repo |
| Name | Description |
|---|---|
id |
The ID of the created Release |
html_url |
The URL users can navigate to in order to view the release |
upload_url |
The URL for uploading assets to the release |
name: 'Create a Release'
description: 'Create a release for a tag in your repository'
author: 'GitHub'
inputs:
tag_name:
description: 'The name of the tag. This should come from the webhook payload, `github.GITHUB_REF` when a user pushes a new tag'
required: true
release_name:
description: 'The name of the release. For example, `Release v1.0.1`'
required: true
body:
description: 'Text describing the contents of the tag.'
required: false
body_path:
description: 'Path to file with information about the tag.'
required: false
draft:
description: '`true` to create a draft (unpublished) release, `false` to create a published one. Default: `false`'
required: false
default: false
prerelease:
description: '`true` to identify the release as a prerelease. `false` to identify the release as a full release. Default: `false`'
required: false
default: false
commitish:
description: 'Any branch or commit SHA the Git tag is created from, unused if the Git tag already exists. Default: SHA of current commit'
required: false
owner:
description: 'Owner of the repository if it is not the current one'
required: false
repo:
description: 'Repository on which to release. Used only if you want to create the release on another repo'
required: false
outputs:
id:
description: 'The ID of the created Release'
html_url:
description: 'The URL users can navigate to in order to view the release'
upload_url:
description: 'The URL for uploading assets to the release'
runs:
using: 'node12'
main: 'dist/index.js'
branding:
icon: 'tag'
color: 'gray-dark'
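Example usage (a sketch of the widely used tag-push pattern; note the metadata above pins `node12`, which current hosted runners no longer support, so a maintained alternative may be needed):

```yaml
# Create a release for the pushed tag; GITHUB_TOKEN supplies the auth.
- name: Create Release
  id: create_release
  uses: actions/create-release@v1
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    tag_name: ${{ github.ref }}
    release_name: Release ${{ github.ref }}
    draft: false
    prerelease: false
```

The `upload_url` output is typically fed into a follow-up asset-upload step.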
Action ID: marketplace/amirisback/android-new-on-back-pressed
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-new-on-back-pressed
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/policy-compliance-scan
Author: Unknown
Publisher: azure
Repository: github.com/azure/policy-compliance-scan
Triggers compliance scan on Azure resources and passes/fails based on the compliance state of the resources
| Name | Required | Description |
|---|---|---|
scopes |
Required | Mandatory. Takes full identifier for one or more azure resources, resource groups or subscriptions. The on-demand policy compliance scan is triggered for all of these. The ID can generally be found in the properties section of the resource in Azure Portal. |
scopes-ignore |
Optional | Optional. Takes full identifier for one or more azure resources, resource groups(followed by /*). If the resources are found non-compliant after the scan completion, the action fails. However, in this input you can specify resources or resource groups for which the compliance state will be ignored. The action will pass irrespective of the compliance state of these resources. In case you want the action to always pass irrespective of the compliance state of resources, you can set its value as all. |
policy-assignments-ignore |
Optional | Optional. Takes full identifier for one or more policy assignments ids. If the resources are found non-compliant for given policy assignment after the scan completion, the action fails. However, in this input you can specify policy assignments ids for which the compliance state will be ignored. The action will pass irrespective of the compliance state of these policies. |
wait |
Optional | Optional. Depending on the breadth, the time taken for compliance scan can range from a few minutes to several hours. By default, the action will wait for the compliance scan to complete and succeed or fail based on the compliance state of resources. However, you can mark this input as false, in which case the action will trigger the compliance scan and succeed immediately. The status of the triggered scan and the compliance state of resources would have to be then viewed in activity log of the resource in Azure portal. Default: True |
skip-report |
Optional | Optional. Defaults to false. If false, the action will upload a CSV file containing a list of resources that are non-compliant after the triggered scan is complete. The CSV file can be downloaded as an artifact from the workflow run for manual analysis. Note that the number of rows in CSV are capped at 100,000. |
report-name |
Optional | Optional. The filename for the CSV to be uploaded. Ignored if skip-report is set to true. |
name: "Azure Policy Compliance Scan"
description: "Triggers compliance scan on Azure resources and passes/fails based on the compliance state of the resources"
inputs:
scopes:
description: "Mandatory. Takes full identifier for one or more azure resources, resource groups or subscriptions. The on-demand policy compliance scan is triggered for all of these. The ID can generally be found in the properties section of the resource in Azure Portal."
required: true
scopes-ignore:
description: "Optional. Takes full identifier for one or more azure resources, resource groups(followed by /*). If the resources are found non-compliant after the scan completion, the action fails. However, in this input you can specify resources or resource groups for which the compliance state will be ignored. The action will pass irrespective of the compliance state of these resources. In case you want the action to always pass irrespective of the compliance state of resources, you can set its value as all."
required: false
policy-assignments-ignore:
description: "Optional. Takes full identifier for one or more policy assignments ids. If the resources are found non-compliant for given policy assignment after the scan completion, the action fails. However, in this input you can specify policy assignments ids for which the compliance state will be ignored. The action will pass irrespective of the compliance state of these policies."
required: false
wait:
description: "Optional. Depending on the breadth, the time taken for compliance scan can range from a few minutes to several hours. By default, the action will wait for the compliance scan to complete and succeed or fail based on the compliance state of resources. However, you can mark this input as false, in which case the action will trigger the compliance scan and succeed immediately. The status of the triggered scan and the compliance state of resources would have to be then viewed in activity log of the resource in Azure portal."
required: false
default: true
skip-report:
description: "Optional. Defaults to false. If false, the action will upload a CSV file containing a list of resources that are non-compliant after the triggered scan is complete. The CSV file can be downloaded as an artifact from the workflow run for manual analysis. Note that the number of rows in CSV are capped at 100,000."
required: false
default: false
report-name:
description: "Optional. The filename for the CSV to be uploaded. Ignored if skip-report is set to true."
required: false
runs:
using: "node12"
main: "lib/run.js"
Action ID: marketplace/aws-actions/amazon-eks-fargate
Author: Michael Hausenblas, hausenb@amazon.com
Publisher: aws-actions
Repository: github.com/aws-actions/amazon-eks-fargate
Creates an EKS on Fargate cluster
name: 'EKS on Fargate'
description: 'Creates an EKS on Fargate cluster'
author: 'Michael Hausenblas, hausenb@amazon.com'
branding:
icon: 'cloud'
color: 'orange'
runs:
using: 'docker'
image: 'Dockerfile'
Action ID: marketplace/azure/load-testing
Author: Unknown
Publisher: azure
Repository: github.com/azure/load-testing
Automate continuous regression testing with Azure Load Testing
| Name | Required | Description |
|---|---|---|
loadtestConfigFile |
Required | Path of the YAML file. Should be fully qualified path or relative to the default working directory |
loadtestResource |
Required | Enter or Select the name of an existing Azure Load Testing resource |
loadtestRunName |
Optional | Custom name for the load test run |
loadtestRunDescription |
Optional | Custom description for the load test run |
resourceGroup |
Required | Enter or Select the Azure Resource Group that contains the Load test resource specified above |
secrets |
Optional | Enter secrets in JSON |
env |
Optional | Enter env in JSON |
overrideParameters |
Optional | Override parameters in the YAML config file using the JSON format with testId, displayName, description, engineInstances, autoStop supported. |
outputVariableName |
Optional | Name of the output variable that stores the test run ID for use in subsequent tasks. |
name: 'Azure Load Testing'
description: 'Automate continuous regression testing with Azure Load Testing'
inputs:
loadtestConfigFile:
description: 'Path of the YAML file. Should be fully qualified path or relative to the default working directory'
required: true
loadtestResource:
description: 'Enter or Select the name of an existing Azure Load Testing resource'
required: true
loadtestRunName:
description: 'Custom name for the load test run'
required: false
loadtestRunDescription:
description: 'Custom description for the load test run'
required: false
resourceGroup:
description: 'Enter or Select the Azure Resource Group that contains the Load test resource specified above'
required: true
secrets:
description: 'Enter secrets in JSON'
required: false
env:
description: 'Enter env in JSON'
required: false
overrideParameters:
description: 'Override parameters in the YAML config file using the JSON format with testId, displayName, description, engineInstances, autoStop supported.'
required: false
outputVariableName:
description: 'Name of the output variable that stores the test run ID for use in subsequent tasks.'
required: false
branding:
icon: 'extension-icon.svg'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main.js'
post: 'lib/postProcessJob.js'
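Example usage (a sketch; the `@v1` ref, resource names, and the preceding `azure/login` step are assumptions):

```yaml
# Run a load test defined in a YAML config against an existing
# Azure Load Testing resource.
- uses: azure/login@v2
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Azure Load Testing
  uses: azure/load-testing@v1   # assumed tag
  with:
    loadtestConfigFile: 'SampleApp.yaml'
    loadtestResource: 'my-loadtest-resource'
    resourceGroup: 'my-resource-group'
```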
Action ID: marketplace/mheap/github-action-csharp-example
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-csharp-example
Show available data for any Action run
name: C# Example Debug
description: Show available data for any Action run
runs:
using: docker
image: Dockerfile
Action ID: marketplace/appleboy/build-push-action
Author: docker
Publisher: appleboy
Repository: github.com/appleboy/build-push-action
Build and push Docker images with Buildx
| Name | Required | Description |
|---|---|---|
add-hosts |
Optional | List of custom host-to-IP mappings (e.g., docker:10.180.0.1) |
allow |
Optional | List of extra privileged entitlement (e.g., network.host,security.insecure) |
attests |
Optional | List of attestation parameters (e.g., type=sbom,generator=image) |
build-args |
Optional | List of build-time variables |
build-contexts |
Optional | List of additional build contexts (e.g., name=path) |
builder |
Optional | Builder instance |
cache-from |
Optional | List of external cache sources for buildx (e.g., user/app:cache, type=local,src=path/to/dir) |
cache-to |
Optional | List of cache export destinations for buildx (e.g., user/app:cache, type=local,dest=path/to/dir) |
cgroup-parent |
Optional | Optional parent cgroup for the container used in the build |
context |
Optional | Build's context is the set of files located in the specified PATH or URL |
file |
Optional | Path to the Dockerfile |
labels |
Optional | List of metadata for an image |
load |
Optional | Load is a shorthand for --output=type=docker Default: false |
network |
Optional | Set the networking mode for the RUN instructions during build |
no-cache |
Optional | Do not use cache when building the image Default: false |
no-cache-filters |
Optional | Do not cache specified stages |
outputs |
Optional | List of output destinations (format: type=local,dest=path) |
platforms |
Optional | List of target platforms for build |
provenance |
Optional | Generate provenance attestation for the build (shorthand for --attest=type=provenance) |
pull |
Optional | Always attempt to pull all referenced images Default: false |
push |
Optional | Push is a shorthand for --output=type=registry Default: false |
insecure |
Optional | insecure Default: false |
sbom |
Optional | Generate SBOM attestation for the build (shorthand for --attest=type=sbom) |
secrets |
Optional | List of secrets to expose to the build (e.g., key=string, GIT_AUTH_TOKEN=mytoken) |
secret-files |
Optional | List of secret files to expose to the build (e.g., key=filename, MY_SECRET=./secret.txt) |
shm-size |
Optional | Size of /dev/shm (e.g., 2g) |
ssh |
Optional | List of SSH agent socket or keys to expose to the build |
tags |
Optional | List of tags |
target |
Optional | Sets the target stage to build |
ulimit |
Optional | Ulimit options (e.g., nofile=1024:1024) |
github-token |
Optional | GitHub Token used to authenticate against a repository for Git context Default: ${{ github.token }} |
| Name | Description |
|---|---|
imageid |
Image ID |
digest |
Image digest |
metadata |
Build result metadata |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: Build and push Docker images
description: Build and push Docker images with Buildx
author: docker
branding:
icon: 'anchor'
color: 'blue'
inputs:
add-hosts:
description: "List of a customs host-to-IP mapping (e.g., docker:10.180.0.1)"
required: false
allow:
description: "List of extra privileged entitlement (e.g., network.host,security.insecure)"
required: false
attests:
description: "List of attestation parameters (e.g., type=sbom,generator=image)"
required: false
build-args:
description: "List of build-time variables"
required: false
build-contexts:
description: "List of additional build contexts (e.g., name=path)"
required: false
builder:
description: "Builder instance"
required: false
cache-from:
description: "List of external cache sources for buildx (e.g., user/app:cache, type=local,src=path/to/dir)"
required: false
cache-to:
description: "List of cache export destinations for buildx (e.g., user/app:cache, type=local,dest=path/to/dir)"
required: false
cgroup-parent:
description: "Optional parent cgroup for the container used in the build"
required: false
context:
description: "Build's context is the set of files located in the specified PATH or URL"
required: false
file:
description: "Path to the Dockerfile"
required: false
labels:
description: "List of metadata for an image"
required: false
load:
description: "Load is a shorthand for --output=type=docker"
required: false
default: 'false'
network:
description: "Set the networking mode for the RUN instructions during build"
required: false
no-cache:
description: "Do not use cache when building the image"
required: false
default: 'false'
no-cache-filters:
description: "Do not cache specified stages"
required: false
outputs:
description: "List of output destinations (format: type=local,dest=path)"
required: false
platforms:
description: "List of target platforms for build"
required: false
provenance:
description: "Generate provenance attestation for the build (shorthand for --attest=type=provenance)"
required: false
pull:
description: "Always attempt to pull all referenced images"
required: false
default: 'false'
push:
description: "Push is a shorthand for --output=type=registry"
required: false
default: 'false'
insecure:
description: "insecure"
required: false
default: 'false'
sbom:
description: "Generate SBOM attestation for the build (shorthand for --attest=type=sbom)"
required: false
secrets:
description: "List of secrets to expose to the build (e.g., key=string, GIT_AUTH_TOKEN=mytoken)"
required: false
secret-files:
description: "List of secret files to expose to the build (e.g., key=filename, MY_SECRET=./secret.txt)"
required: false
shm-size:
description: "Size of /dev/shm (e.g., 2g)"
required: false
ssh:
description: "List of SSH agent socket or keys to expose to the build"
required: false
tags:
description: "List of tags"
required: false
target:
description: "Sets the target stage to build"
required: false
ulimit:
description: "Ulimit options (e.g., nofile=1024:1024)"
required: false
github-token:
description: "GitHub Token used to authenticate against a repository for Git context"
default: ${{ github.token }}
required: false
outputs:
imageid:
description: 'Image ID'
digest:
description: 'Image digest'
metadata:
description: 'Build result metadata'
runs:
using: 'node16'
main: 'dist/index.js'
post: 'dist/index.js'
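Example usage (illustrative; this fork is assumed to behave like the upstream docker/build-push-action, and the `@master` ref, registry, and tag are placeholders):

```yaml
# Set up Buildx, log in to a registry, then build and push an image.
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
  uses: appleboy/build-push-action@master   # assumed ref for this fork
  with:
    context: .
    push: true
    tags: user/app:latest
```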
Action ID: marketplace/actions/checkout
Author: Unknown
Publisher: actions
Repository: github.com/actions/checkout
Checkout a Git repository at a particular version
| Name | Required | Description |
|---|---|---|
repository |
Optional | Repository name with owner. For example, actions/checkout Default: ${{ github.repository }} |
ref |
Optional | The branch, tag or SHA to checkout. When checking out the repository that triggered a workflow, this defaults to the reference or SHA for that event. Otherwise, uses the default branch. |
token |
Optional | Personal access token (PAT) used to fetch the repository. The PAT is configured with the local git config, which enables your scripts to run authenticated git commands. The post-job step removes the PAT.
We recommend using a service account with the least permissions necessary. Also when generating a new PAT, select the least scopes necessary.
[Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
Default: ${{ github.token }} |
ssh-key |
Optional | SSH key used to fetch the repository. The SSH key is configured with the local git config, which enables your scripts to run authenticated git commands. The post-job step removes the SSH key. We recommend using a service account with the least permissions necessary. [Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets) |
ssh-known-hosts |
Optional | Known hosts in addition to the user and global host key database. The public SSH keys for a host may be obtained using the utility `ssh-keyscan`. For example, `ssh-keyscan github.com`. The public key for github.com is always implicitly added. |
ssh-strict |
Optional | Whether to perform strict host key checking. When true, adds the options `StrictHostKeyChecking=yes` and `CheckHostIP=no` to the SSH command line. Use the input `ssh-known-hosts` to configure additional hosts.
Default: True |
ssh-user |
Optional | The user to use when connecting to the remote SSH host. By default 'git' is used.
Default: git |
persist-credentials |
Optional | Whether to configure the token or SSH key with the local git config Default: True |
path |
Optional | Relative path under $GITHUB_WORKSPACE to place the repository |
clean |
Optional | Whether to execute `git clean -ffdx && git reset --hard HEAD` before fetching Default: True |
filter |
Optional | Partially clone against a given filter. Overrides sparse-checkout if set. |
sparse-checkout |
Optional | Do a sparse checkout on given patterns. Each pattern should be separated with new lines. |
sparse-checkout-cone-mode |
Optional | Specifies whether to use cone-mode when doing a sparse checkout.
Default: True |
fetch-depth |
Optional | Number of commits to fetch. 0 indicates all history for all branches and tags. Default: 1 |
fetch-tags |
Optional | Whether to fetch tags, even if fetch-depth > 0. |
show-progress |
Optional | Whether to show progress status output when fetching. Default: True |
lfs |
Optional | Whether to download Git-LFS files |
submodules |
Optional | Whether to checkout submodules: `true` to checkout submodules or `recursive` to recursively checkout submodules. When the `ssh-key` input is not provided, SSH URLs beginning with `git@github.com:` are converted to HTTPS. |
set-safe-directory |
Optional | Add repository path as safe.directory for Git global config by running `git config --global --add safe.directory <path>` Default: True |
github-server-url |
Optional | The base URL for the GitHub instance that you are trying to clone from, will use environment defaults to fetch from the same instance that the workflow is running from unless specified. Example URLs are https://github.com or https://my-ghes-server.example.com |
| Name | Description |
|---|---|
ref |
The branch, tag or SHA that was checked out |
commit |
The commit SHA that was checked out |
name: 'Checkout'
description: 'Checkout a Git repository at a particular version'
inputs:
repository:
description: 'Repository name with owner. For example, actions/checkout'
default: ${{ github.repository }}
ref:
description: >
The branch, tag or SHA to checkout. When checking out the repository that
triggered a workflow, this defaults to the reference or SHA for that
event. Otherwise, uses the default branch.
token:
description: >
Personal access token (PAT) used to fetch the repository. The PAT is configured
with the local git config, which enables your scripts to run authenticated git
commands. The post-job step removes the PAT.
We recommend using a service account with the least permissions necessary.
Also when generating a new PAT, select the least scopes necessary.
[Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
default: ${{ github.token }}
ssh-key:
description: >
SSH key used to fetch the repository. The SSH key is configured with the local
git config, which enables your scripts to run authenticated git commands.
The post-job step removes the SSH key.
We recommend using a service account with the least permissions necessary.
[Learn more about creating and using
encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
ssh-known-hosts:
description: >
Known hosts in addition to the user and global host key database. The public
SSH keys for a host may be obtained using the utility `ssh-keyscan`. For example,
`ssh-keyscan github.com`. The public key for github.com is always implicitly added.
ssh-strict:
description: >
Whether to perform strict host key checking. When true, adds the options `StrictHostKeyChecking=yes`
and `CheckHostIP=no` to the SSH command line. Use the input `ssh-known-hosts` to
configure additional hosts.
default: true
ssh-user:
description: >
The user to use when connecting to the remote SSH host. By default 'git' is used.
default: git
persist-credentials:
description: 'Whether to configure the token or SSH key with the local git config'
default: true
path:
description: 'Relative path under $GITHUB_WORKSPACE to place the repository'
clean:
description: 'Whether to execute `git clean -ffdx && git reset --hard HEAD` before fetching'
default: true
filter:
description: >
Partially clone against a given filter.
Overrides sparse-checkout if set.
default: null
sparse-checkout:
description: >
Do a sparse checkout on given patterns.
Each pattern should be separated with new lines.
default: null
sparse-checkout-cone-mode:
description: >
Specifies whether to use cone-mode when doing a sparse checkout.
default: true
fetch-depth:
description: 'Number of commits to fetch. 0 indicates all history for all branches and tags.'
default: 1
fetch-tags:
description: 'Whether to fetch tags, even if fetch-depth > 0.'
default: false
show-progress:
description: 'Whether to show progress status output when fetching.'
default: true
lfs:
description: 'Whether to download Git-LFS files'
default: false
submodules:
description: >
Whether to checkout submodules: `true` to checkout submodules or `recursive` to
recursively checkout submodules.
When the `ssh-key` input is not provided, SSH URLs beginning with `git@github.com:` are
converted to HTTPS.
default: false
set-safe-directory:
description: Add repository path as safe.directory for Git global config by running `git config --global --add safe.directory <path>`
default: true
github-server-url:
description: The base URL for the GitHub instance that you are trying to clone from, will use environment defaults to fetch from the same instance that the workflow is running from unless specified. Example URLs are https://github.com or https://my-ghes-server.example.com
required: false
outputs:
ref:
description: 'The branch, tag or SHA that was checked out'
commit:
description: 'The commit SHA that was checked out'
runs:
using: node24
main: dist/index.js
post: dist/index.js
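Example usage (common patterns; the second step's repository name and path are illustrative):

```yaml
# Default shallow clone of the current repository.
- uses: actions/checkout@v4

# Clone another repository into a subdirectory with full history.
- uses: actions/checkout@v4
  with:
    repository: my-org/my-tools   # illustrative repository
    path: tools
    fetch-depth: 0                # 0 = all history for all branches and tags
```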
Action ID: marketplace/raeperd/line-in-action
Author: raeperd
Publisher: raeperd
Repository: github.com/raeperd/line-in-action
Github action for pushing LINE message using LINE-Notify
| Name | Required | Description |
|---|---|---|
token |
Required | An access token for authentication. Used for calling the notification API |
message |
Required | 1000 characters max |
notificationDisabled |
Optional | If true, user doesn't receive a push notification when the message is sent. If false, The user receives a push notification when the message is sent. If omitted, the value defaults to false. Default: false |
stickerPackageId |
Optional | Package ID. Details in https://developers.line.biz/media/messaging-api/sticker_list.pdf |
stickerId |
Optional | Sticker ID. Details in https://developers.line.biz/media/messaging-api/sticker_list.pdf |
name: 'LINE-in-action'
description: 'Github action for pushing LINE message using LINE-Notify'
author: 'raeperd'
branding:
icon: 'message-square'
color: 'green'
inputs:
token:
required: true
description: An access token for authentication. Used for calling the notification API
message:
required: true
description: 1000 characters max
notificationDisabled:
required: false
description: If true, user doesn't receive a push notification when the message is sent.
If false, The user receives a push notification when the message is sent.
If omitted, the value defaults to false.
default: 'false'
stickerPackageId:
required: false
description: Package ID. Details in https://developers.line.biz/media/messaging-api/sticker_list.pdf
stickerId:
required: false
description: Sticker ID. Details in https://developers.line.biz/media/messaging-api/sticker_list.pdf
runs:
using: 'node12'
main: 'dist/index.js'
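Example usage (a sketch; the `@master` ref is an assumption and the token is expected to be a LINE Notify access token stored as a secret):

```yaml
# Push a LINE message when this step runs.
- name: Notify via LINE
  uses: raeperd/line-in-action@master   # assumed ref
  with:
    token: ${{ secrets.LINE_NOTIFY_TOKEN }}
    message: 'Build finished for ${{ github.repository }}@${{ github.sha }}'
```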
Action ID: marketplace/srggrs/assign-one-project-github-action
Author: srggrs
Publisher: srggrs
Repository: github.com/srggrs/assign-one-project-github-action
Assign new/labeled Issue or Pull Request to a specific project dashboard column
| Name | Required | Description |
|---|---|---|
project |
Required | The url of the project to be assigned to. |
column_name |
Optional | The column name of the project, defaults to "To do" for issues and "In progress" for pull requests. |
# action.yml
name: 'Assign to One Project'
description: 'Assign new/labeled Issue or Pull Request to a specific project dashboard column'
author: srggrs
inputs:
project:
description: 'The url of the project to be assigned to.'
required: true
column_name:
description: 'The column name of the project, defaults to "To do" for issues and "In progress" for pull requests.'
required: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.project }}
- ${{ inputs.column_name }}
branding:
icon: 'box'
color: 'red'
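Example usage (illustrative; the `@v2.1.0` ref, project URL, and the `MY_GITHUB_TOKEN` env name are assumptions and should be verified against the repository's README):

```yaml
on:
  issues:
    types: [opened, labeled]
jobs:
  assign_to_project:
    runs-on: ubuntu-latest
    steps:
      - name: Assign new issues to project
        uses: srggrs/assign-one-project-github-action@v2.1.0   # assumed tag
        with:
          project: 'https://github.com/org/repo/projects/1'    # illustrative URL
          column_name: 'To do'
        env:
          MY_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}          # assumed env name
```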
Action ID: marketplace/google-github-actions/deploy-cloud-functions
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/deploy-cloud-functions
Use this action to deploy code to Google Cloud Functions.
| Name | Required | Description |
|---|---|---|
project_id |
Optional | ID of the Google Cloud project in which to deploy the service. The default value is computed from the environment. |
region |
Optional | Region in which the function should be deployed. Default: us-central1 |
universe |
Optional | The Google Cloud universe to use for constructing API endpoints. Trusted
Partner Cloud and Google Distributed Hosted Cloud should set this to their
universe address.
You can also override individual API endpoints by setting the environment
variable `GHA_ENDPOINT_OVERRIDE_<endpoint>` where `<endpoint>` is the API
endpoint to override. For example:
```yaml
env:
GHA_ENDPOINT_OVERRIDE_oauth2: 'https://oauth2.myapi.endpoint/v1'
```
For more information about universes, see the Google Cloud documentation. Default: googleapis.com |
name |
Required | Name of the Cloud Function. |
description |
Optional | Human-friendly description of the Cloud Function. |
environment |
Optional | Runtime environment for the Cloud Function. Allowed values are "GEN_1" and
"GEN_2", but this GitHub Action only provides support for "GEN_2". Default: GEN_2 |
kms_key_name |
Optional | Resource name of a Google Cloud KMS crypto key used to encrypt/decrypt function resources. If specified, you must also provide an artifact registry repository using the 'docker_repository' field that was created with the same key. |
labels |
Optional | List of labels that should be set on the function. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml labels: |- labela=my-label labelb=my-other-label ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. Google Cloud restricts the allowed values and length for labels. Please see the Google Cloud documentation for labels for more information. |
source_dir |
Optional | Path on disk to the root of the function's source code. Defaults to
current directory. This does NOT follow symlinks to directories or files
when generating the upload artifact.
**NOTE:** The function source code must exist on the GitHub Actions
filesystem. This means you must have `uses: actions/checkout@v4` before the
deployment step! Defaults to ./ |
runtime |
Required | Runtime for the function, such as "nodejs20". For a list of all available runtimes, run: ```sh $ gcloud functions runtimes list ``` The available runtimes change over time. |
build_environment_variables |
Optional | List of environment variables that should be set in the build environment. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml build_environment_variables: |- FRUIT=apple SENTENCE=" this will retain leading and trailing spaces " ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. Previous versions of this GitHub Action also included a separate input for sourcing values from a file, but this is no longer supported. Use a community action or script to read the file in a separate step and import the contents as an output. |
build_service_account |
Optional | Service account to be used for building the container. |
build_worker_pool |
Optional | Name of the Cloud Build Custom Worker Pool that should be used to build the function. The format of this field is: projects/[PROJECT]/locations/[REGION]/workerPools/[WORKER_POOL] where `[PROJECT]` and `[REGION]` are the project id and region respectively where the worker pool is defined and `[WORKER_POOL]` is the short name of the worker pool. If the project ID is not the same as the function, then the Cloud Functions Service Agent must be granted the role Cloud Build Custom Workers Builder in the project. |
docker_repository |
Optional | Repository in Artifact Registry to which the function docker image will be pushed after it is built by Cloud Build. If unspecified, Cloud Functions will create and use a repository named 'gcf-artifacts' for every deployed region. The value must match the pattern: projects/[PROJECT]/locations/[LOCATION]/repositories/[REPOSITORY]. Cross-project repositories are not supported. Cross-location repositories are not supported. Repository format must be 'DOCKER'. |
entry_point |
Optional | Name of a Google Cloud Function (as defined in source code) that will be executed. Defaults to the resource name suffix (ID of the function), if not specified. |
all_traffic_on_latest_revision |
Optional | If true, the latest function revision will be served all traffic. Default: True |
cpu |
Optional | The number of available CPUs to set (e.g. 0.5, 2, 2000m). By default, a new function's available CPUs is determined based on its memory value. |
memory |
Optional | The amount of memory available for the function to use. Allowed values are of the format: <number><unit> with allowed units of "k", "M", "G", "Ki", "Mi", "Gi" (e.g 128M, 10Mb, 1024Gi). For all generations, the default value is 256MB of memory. |
environment_variables |
Optional | List of environment variables that should be set in the runtime environment. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml environment_variables: |- FRUIT=apple SENTENCE=" this will retain leading and trailing spaces " ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. Previous versions of this GitHub Action also included a separate input for sourcing values from a file, but this is no longer supported. Use a community action or script to read the file in a separate step and import the contents as an output. |
ingress_settings |
Optional | Ingress settings controls what traffic can reach the function. Valid
values are "ALLOW_ALL", "ALLOW_INTERNAL_ONLY", and
"ALLOW_INTERNAL_AND_GCLB". Default: ALLOW_ALL |
max_instance_count |
Optional | Sets the maximum number of instances for the function. A function execution that would exceed max-instances times out. |
max_instance_request_concurrency |
Optional | Sets the maximum number of concurrent requests allowed per container instance. |
min_instance_count |
Optional | Sets the minimum number of instances for the function. This is helpful for reducing cold start times. |
secrets |
Optional | List of KEY=VALUE pairs to use as secrets. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. These can either be injected as environment variables or mounted as volumes. Keys starting with a forward slash '/' are mount paths. All other keys correspond to environment variables: ```yaml with: secrets: |- # As an environment variable: KEY1=projects/my-project/secrets/my-secret/versions/latest # As a volume mount: /secrets/api/key=projects/my-project/secrets/my-secret/versions/123 ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. |
service_account |
Optional | The email address of the IAM service account associated with the Cloud Run service for the function. The service account represents the identity of the running function, and determines what permissions the function has. If not provided, the function will use the project's default service account for Compute Engine. Note this differs from the service account used to deploy the Cloud Function, which is the currently-authenticated principal. However, the deploying service account must have permission to impersonate the runtime service account, which can be achieved by granting the deployment service account "roles/iam.serviceAccountUser" permission on the runtime service account. |
service_timeout |
Optional | The function execution timeout, specified as a time duration (e.g. "30s"
for 30 seconds). Default: 60s |
vpc_connector |
Optional | ID of the connector or fully qualified identifier for the connector. |
vpc_connector_egress_settings |
Optional | Egress settings controls what traffic is diverted through the VPC Access
Connector resource. Allowed values are "PRIVATE_RANGES_ONLY" and
"ALL_TRAFFIC". Default: PRIVATE_RANGES_ONLY |
event_trigger_location |
Optional | The location of the trigger, which must be a region or multi-region where the relevant events originate. |
event_trigger_type |
Optional | Specifies which action should trigger the function. For a list of acceptable values, run: ```sh $ gcloud functions event-types list ``` This usually requires the eventarc API to be enabled: ```sh $ gcloud services enable eventarc.googleapis.com ``` The available trigger types may change over time. |
event_trigger_filters |
Optional | List of event filters that the trigger should monitor. An event that matches all the filters will trigger calls to the function. These are comma-separated or newline-separated `ATTRIBUTE=VALUE`. Attributes or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. To treat a value as a path pattern, prefix the value with the literal string `PATTERN:`. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml event_trigger_type: 'google.cloud.audit.log.v1.written' event_trigger_filters: |- serviceName=compute.googleapis.com methodName=PATTERN:compute.instances.* ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. For more information, see [Eventarc Triggers](https://cloud.google.com/functions/docs/calling/eventarc) and [Eventarc Path Patterns](https://cloud.google.com/eventarc/docs/path-patterns). |
event_trigger_pubsub_topic |
Optional | Name of Google Cloud Pub/Sub topic. Every message published in this topic will trigger function execution with message contents passed as input data of the format: projects/[PROJECT]/topics/[TOPIC] The service account must have permissions on this topic. |
event_trigger_service_account |
Optional | The email address of the IAM service account associated with the Eventarc trigger for the function. This is used for authenticated invocation. |
event_trigger_retry |
Optional | Describes whether event triggers should retry if the function's
execution fails. Default: True |
event_trigger_channel |
Optional | The name of the channel associated with the trigger in the format: projects/[PROJECT]/locations/[LOCATION]/channels/<channel> You must provide a channel to receive events from Eventarc SaaS partners. |
| Name | Description |
|---|---|
name |
Full resource name of the Cloud Function, of the format: projects/[PROJECT]/locations/[LOCATION]/functions/<function> |
url |
The URL of your Cloud Function. |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Deploy to Cloud Functions'
author: 'Google LLC'
description: |-
Use this action to deploy code to Google Cloud Functions.
inputs:
#
# Google Cloud
# ------------
project_id:
description: |-
ID of the Google Cloud project in which to deploy the service. The default
value is computed from the environment.
required: false
region:
description: |-
Region in which the function should be deployed.
default: 'us-central1'
required: false
universe:
description: |-
The Google Cloud universe to use for constructing API endpoints. Trusted
Partner Cloud and Google Distributed Hosted Cloud should set this to their
universe address.
You can also override individual API endpoints by setting the environment
variable `GHA_ENDPOINT_OVERRIDE_<endpoint>` where `<endpoint>` is the API
endpoint to override. For example:
```yaml
env:
GHA_ENDPOINT_OVERRIDE_oauth2: 'https://oauth2.myapi.endpoint/v1'
```
For more information about universes, see the Google Cloud documentation.
default: 'googleapis.com'
required: false
#
# Top-level
# ---------
name:
description: |-
Name of the Cloud Function.
required: true
description:
description: |-
Human-friendly description of the Cloud Function.
required: false
environment:
description: |-
Runtime environment for the Cloud Function. Allowed values are "GEN_1" and
"GEN_2", but this GitHub Action only provides support for "GEN_2".
default: 'GEN_2'
required: false
kms_key_name:
description: |-
Resource name of a Google Cloud KMS crypto key used to encrypt/decrypt
function resources. If specified, you must also provide an artifact
registry repository using the 'docker_repository' field that was created
with the same key.
required: false
labels:
description: |-
List of labels that should be set on the function. These are
comma-separated or newline-separated `KEY=VALUE`. Keys or values that
contain separators must be escaped with a backslash (e.g. `\,` or `\\n`)
unless quoted. Any leading or trailing whitespace is trimmed unless values
are quoted.
```yaml
labels: |-
labela=my-label
labelb=my-other-label
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
Google Cloud restricts the allowed values and length for labels. Please
see the Google Cloud documentation for labels for more information.
required: false
source_dir:
description: |-
Path on disk to the root of the function's source code. Defaults to the
current directory. This does NOT follow symlinks to directories or files
when generating the upload artifact.
**NOTE:** The function source code must exist on the GitHub Actions
filesystem. This means you must have `uses: actions/checkout@v4` before the
deployment step!
default: './'
required: false
#
# buildConfig
# -----------
runtime:
description: |-
Runtime for the function, such as "nodejs20". For a list of all available
runtimes, run:
```sh
$ gcloud functions runtimes list
```
The available runtimes change over time.
required: true
build_environment_variables:
description: |-
List of environment variables that should be set in the build environment.
These are comma-separated or newline-separated `KEY=VALUE`. Keys or values
that contain separators must be escaped with a backslash (e.g. `\,` or
`\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless
values are quoted.
```yaml
build_environment_variables: |-
FRUIT=apple
SENTENCE=" this will retain leading and trailing spaces "
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
Previous versions of this GitHub Action also included a separate input for
sourcing values from a file, but this is no longer supported. Use a
community action or script to read the file in a separate step and import
the contents as an output.
required: false
build_service_account:
description: |-
Service account to be used for building the container.
required: false
build_worker_pool:
description: |-
Name of the Cloud Build Custom Worker Pool that should be used to build
the function. The format of this field is:
projects/[PROJECT]/locations/[REGION]/workerPools/[WORKER_POOL]
where `[PROJECT]` and `[REGION]` are the project id and region
respectively where the worker pool is defined and `[WORKER_POOL]` is the
short name of the worker pool.
If the project ID is not the same as the function, then the Cloud
Functions Service Agent must be granted the role Cloud Build Custom
Workers Builder in the project.
required: false
docker_repository:
description: |-
Repository in Artifact Registry to which the function docker image will be
pushed after it is built by Cloud Build. If unspecified, Cloud Functions
will create and use a repository named 'gcf-artifacts' for every deployed
region.
The value must match the pattern:
projects/[PROJECT]/locations/[LOCATION]/repositories/[REPOSITORY].
Cross-project repositories are not supported. Cross-location repositories
are not supported. Repository format must be 'DOCKER'.
required: false
entry_point:
description: |-
Name of a Google Cloud Function (as defined in source code) that will be
executed. Defaults to the resource name suffix (ID of the function), if
not specified.
required: false
#
# serviceConfig
# -------------
all_traffic_on_latest_revision:
description: |-
If true, the latest function revision will be served all traffic.
default: true
required: false
cpu:
description: |-
The number of available CPUs to set (e.g. 0.5, 2, 2000m). By default, a
new function's available CPUs is determined based on its memory value.
required: false
memory:
description: |-
The amount of memory available for the function to use. Allowed values are
of the format: <number><unit> with allowed units of "k", "M", "G", "Ki",
"Mi", "Gi" (e.g 128M, 10Mb, 1024Gi).
For all generations, the default value is 256MB of memory.
required: false
environment_variables:
description: |-
List of environment variables that should be set in the runtime
environment. These are comma-separated or newline-separated `KEY=VALUE`.
Keys or values that contain separators must be escaped with a backslash
(e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is
trimmed unless values are quoted.
```yaml
environment_variables: |-
FRUIT=apple
SENTENCE=" this will retain leading and trailing spaces "
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
Previous versions of this GitHub Action also included a separate input for
sourcing values from a file, but this is no longer supported. Use a
community action or script to read the file in a separate step and import
the contents as an output.
required: false
ingress_settings:
description: |-
Ingress settings controls what traffic can reach the function. Valid
values are "ALLOW_ALL", "ALLOW_INTERNAL_ONLY", and
"ALLOW_INTERNAL_AND_GCLB".
default: 'ALLOW_ALL'
required: false
max_instance_count:
description: |-
Sets the maximum number of instances for the function. A function
execution that would exceed max-instances times out.
required: false
max_instance_request_concurrency:
description: |-
Sets the maximum number of concurrent requests allowed per container
instance.
required: false
min_instance_count:
description: |-
Sets the minimum number of instances for the function. This is helpful for
reducing cold start times.
required: false
secrets:
description: |-
List of KEY=VALUE pairs to use as secrets. These are comma-separated or
newline-separated `KEY=VALUE`. Keys or values that contain separators must
be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any
leading or trailing whitespace is trimmed unless values are quoted.
These can either be injected as environment variables or mounted as
volumes. Keys starting with a forward slash '/' are mount paths. All other
keys correspond to environment variables:
```yaml
with:
secrets: |-
# As an environment variable:
KEY1=projects/my-project/secrets/my-secret/versions/latest
# As a volume mount:
/secrets/api/key=projects/my-project/secrets/my-secret/versions/123
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
required: false
service_account:
description: |-
The email address of the IAM service account associated with the Cloud Run
service for the function. The service account represents the identity of
the running function, and determines what permissions the function has. If
not provided, the function will use the project's default service account
for Compute Engine.
Note this differs from the service account used to deploy the Cloud
Function, which is the currently-authenticated principal. However, the
deploying service account must have permission to impersonate the runtime
service account, which can be achieved by granting the deployment service
account "roles/iam.serviceAccountUser" permission on the runtime service
account.
required: false
service_timeout:
description: |-
The function execution timeout, specified as a time duration (e.g. "30s"
for 30 seconds).
default: '60s'
required: false
vpc_connector:
description: |-
ID of the connector or fully qualified identifier for the connector.
required: false
vpc_connector_egress_settings:
description: |-
Egress settings controls what traffic is diverted through the VPC Access
Connector resource. Allowed values are "PRIVATE_RANGES_ONLY" and
"ALL_TRAFFIC".
default: 'PRIVATE_RANGES_ONLY'
required: false
#
# eventTrigger
# -------------
event_trigger_location:
description: |-
The location of the trigger, which must be a region or multi-region where
the relevant events originate.
required: false
event_trigger_type:
description: |-
Specifies which action should trigger the function. For a list of
acceptable values, run:
```sh
$ gcloud functions event-types list
```
This usually requires the eventarc API to be enabled:
```sh
$ gcloud services enable eventarc.googleapis.com
```
The available trigger types may change over time.
required: false
event_trigger_filters:
description: |-
List of event filters that the trigger should monitor. An event that
matches all the filters will trigger calls to the function. These are
comma-separated or newline-separated `ATTRIBUTE=VALUE`. Attributes or
values that contain separators must be escaped with a backslash (e.g. `\,`
or `\\n`) unless quoted. To treat a value as a path pattern, prefix the
value with the literal string `PATTERN:`. Any leading or trailing
whitespace is trimmed unless values are quoted.
```yaml
event_trigger_type: 'google.cloud.audit.log.v1.written'
event_trigger_filters: |-
serviceName=compute.googleapis.com
methodName=PATTERN:compute.instances.*
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
For more information, see [Eventarc
Triggers](https://cloud.google.com/functions/docs/calling/eventarc) and
[Eventarc Path
Patterns](https://cloud.google.com/eventarc/docs/path-patterns).
event_trigger_pubsub_topic:
description: |-
Name of Google Cloud Pub/Sub topic. Every message published in this topic
will trigger function execution with message contents passed as input
data of the format:
projects/[PROJECT]/topics/[TOPIC]
The service account must have permissions on this topic.
required: false
event_trigger_service_account:
description: |-
The email address of the IAM service account associated with the Eventarc
trigger for the function. This is used for authenticated invocation.
required: false
event_trigger_retry:
description: |-
Describes whether event triggers should retry if the function's
execution fails.
default: true
required: false
event_trigger_channel:
description: |-
The name of the channel associated with the trigger in the format:
projects/[PROJECT]/locations/[LOCATION]/channels/<channel>
You must provide a channel to receive events from Eventarc SaaS partners.
required: false
outputs:
name:
description: |-
Full resource name of the Cloud Function, of the format:
projects/[PROJECT]/locations/[LOCATION]/functions/<function>
url:
description: |-
The URL of your Cloud Function.
branding:
icon: 'code'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
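A minimal workflow sketch for this deploy action, assuming it is consumed as `google-github-actions/deploy-cloud-functions@v3` with Workload Identity Federation already configured for `google-github-actions/auth`; the provider, service account, and function values below are placeholders:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      # The function source must exist on the runner filesystem.
      - uses: actions/checkout@v4

      # Placeholder Workload Identity Federation values.
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: 'projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider'
          service_account: 'deployer@my-project.iam.gserviceaccount.com'

      - uses: google-github-actions/deploy-cloud-functions@v3
        id: deploy
        with:
          name: 'my-function'     # required
          runtime: 'nodejs20'     # required
          region: 'us-central1'
          entry_point: 'handler'
          environment_variables: |-
            FRUIT=apple

      - run: echo "Deployed to ${{ steps.deploy.outputs.url }}"
```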
Action ID: marketplace/dailydotdev/action-devcard
Author: Ole-Martin Bratteng
Publisher: dailydotdev
Repository: github.com/dailydotdev/action-devcard
GitHub Action to download the devcard from daily.dev
| Name | Required | Description |
|---|---|---|
user_id |
Required | Your daily.dev user id |
type |
Optional | Configure orientation for devcard. Must be either "default" or "wide" Default: default |
token |
Optional | GitHub Token used to commit the devcard Default: ${{ github.token }} |
commit_branch |
Optional | Branch used to commit the devcard |
commit_message |
Optional | Commit message Default: Update ${filename} |
commit_filename |
Optional | Filename to save devcard to Default: devcard.png |
committer_email |
Optional | Committer email Default: 41898282+github-actions[bot]@users.noreply.github.com |
committer_name |
Optional | Committer name Default: github-actions[bot] |
dryrun |
Optional | If true, no changes will be made to the repo |
name: '@dailydotdev/devcard'
description: 'GitHub Action to download the devcard from daily.dev'
author: 'Ole-Martin Bratteng'
branding:
icon: user-check
color: gray-dark
inputs:
user_id:
description: 'Your daily.dev user id'
required: true
type:
description: 'Configure orientation for devcard. Must be either "default" or "wide"'
default: default
required: false
token:
description: GitHub Token used to commit the devcard
default: ${{ github.token }}
required: false
commit_branch:
description: Branch used to commit the devcard
default: ""
required: false
commit_message:
description: Commit message
default: Update ${filename}
required: false
commit_filename:
description: Filename to save devcard to
default: devcard.png
required: false
committer_email:
description: Committer email
default: 41898282+github-actions[bot]@users.noreply.github.com
required: false
committer_name:
description: Committer name
default: github-actions[bot]
required: false
dryrun:
description: 'If true, no changes will be made to the repo'
default: false
required: false
runs:
using: 'node20'
main: "dist/index.js"
Action ID: marketplace/MansaGroup/nrwl-nx-action
Author: Mansa Group
Publisher: MansaGroup
Repository: github.com/MansaGroup/nrwl-nx-action
Wrap the Nx monorepo tool and execute tasks depending on the action context
| Name | Required | Description |
|---|---|---|
targets |
Required | Comma-separated list of targets to execute |
projects |
Optional | Comma-separated list of projects to use |
all |
Optional | Run the targets on all projects Default: false |
affected |
Optional | Use the Nx' affected subcommand to only process affected projects Default: true |
parallel |
Optional | Number of tasks to execute in parallel (can be expensive) Default: 3 |
args |
Optional | Optional arguments to append to the Nx command |
nxCloud |
Optional | Enable support for Nx Cloud Default: false |
workingDirectory |
Optional | Path to the Nx workspace, needed if not the repository root |
name: 'Nrwl Nx'
author: 'Mansa Group'
description: 'Wrap the Nx monorepo tool and execute tasks depending on the action context'
inputs:
targets:
description: 'Comma-separated list of targets to execute'
required: true
projects:
description: 'Comma-separated list of projects to use'
required: false
all: # Mutually exclusive with `affected`
description: 'Run the targets on all projects'
required: false
default: 'false'
affected: # Mutually exclusive with `all`
description: "Use the Nx' affected subcommand to only process affected projects"
required: false
default: 'true'
parallel:
description: 'Number of tasks to execute in parallel (can be expensive)'
required: false
default: '3'
args:
description: 'Optional arguments to append to the Nx command'
required: false
default: ''
nxCloud:
description: 'Enable support for Nx Cloud'
required: false
default: 'false'
workingDirectory:
description: 'Path to the Nx workspace, needed if not the repository root'
required: false
runs:
using: 'node16'
main: 'dist/index.js'
branding:
icon: 'terminal'
color: 'blue'
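A sketch of running affected Nx targets on pull requests; the version tag and the Node setup steps are assumptions about the consuming repository:
```yaml
jobs:
  nx:
    runs-on: ubuntu-latest
    steps:
      # Full history so Nx can compute affected projects.
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - uses: MansaGroup/nrwl-nx-action@v3   # tag is an assumption
        with:
          targets: lint,test,build   # required
          affected: 'true'
          parallel: '3'
```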
Action ID: marketplace/vsoch/pull-request-action
Author: vsoch
Publisher: vsoch
Repository: github.com/vsoch/pull-request-action
A GitHub action to open a pull request
| Name | Description |
|---|---|
pull_request_number |
If the pull request is opened, this is the number for it. |
pull_request_url |
If the pull request is opened, the html url for it. |
pull_request_return_code |
The pull request return code. |
assignees_return_code |
The add assignees post return code. |
reviewers_return_code |
The add reviewers post return code. |
name: 'Pull Request Action'
description: 'A GitHub action to open a pull request'
author: 'vsoch'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'activity'
color: 'yellow'
outputs:
pull_request_number:
description: 'If the pull request is opened, this is the number for it.'
pull_request_url:
description: 'If the pull request is opened, the html url for it.'
pull_request_return_code:
description: 'The pull request return code.'
assignees_return_code:
description: 'The add assignees post return code.'
reviewers_return_code:
description: 'The add reviewers post return code.'
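This Docker action is driven by environment variables rather than `with:` inputs; the sketch below is an assumption-heavy illustration (in particular `PULL_REQUEST_BRANCH` as the target-branch variable should be verified against the repository README), while the outputs it reads are the ones documented above:
```yaml
jobs:
  open-pr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Open pull request
        id: pr
        uses: vsoch/pull-request-action@master   # pin to a release in practice
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PULL_REQUEST_BRANCH: main   # assumed variable name for the target branch
      - run: |
          echo "Opened PR #${{ steps.pr.outputs.pull_request_number }}"
          echo "URL: ${{ steps.pr.outputs.pull_request_url }}"
```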
Action ID: marketplace/dflook/terraform-plan
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-plan
Create a Terraform plan
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the Terraform root module to generate a plan for. Default: . |
workspace |
Optional | Terraform workspace to run the plan for. Default: default |
label |
Optional | A friendly name for the environment the Terraform configuration is for. This will be used in the PR comment for easy identification. If this is set, it must be the same as the `label` used in any corresponding [`dflook/terraform-apply`](https://github.com/dflook/terraform-github-actions/tree/main/terraform-apply) action. |
variables |
Optional | Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of Terraform backend config values, one per line. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
replace |
Optional | List of resources to replace, one per line. |
target |
Optional | List of resources to target, one per line. The plan will be limited to these resources and their dependencies. |
destroy |
Optional | Set to `true` to generate a plan to destroy all resources.
This generates a plan in [destroy mode](https://developer.hashicorp.com/terraform/cli/commands/plan#planning-modes).
Default: false |
refresh |
Optional | Set to `false` to skip synchronisation of the Terraform state with actual resources.
This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans.
Default: true |
add_github_comment |
Optional | Controls whether a comment is added to the PR with the generated plan.
The default is `true`, which adds a comment to the PR with the results of the plan.
Set to `changes-only` to add a comment only when the plan indicates there are changes to apply.
Set to `always-new` to always create a new comment for each plan, instead of updating the previous comment.
Set to `false` to disable the comment - the plan will still appear in the workflow log.
Default: true |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
changes |
Set to 'true' if the plan would apply any changes, 'false' if it wouldn't. |
plan_path |
This is the path to the generated plan in an opaque binary format. The path is relative to the Actions workspace. The plan can be used as the `plan_file` input to the [dflook/terraform-apply](https://github.com/dflook/terraform-github-actions/tree/main/terraform-apply) action. Terraform plans often contain sensitive information, so this output should be treated with care. |
json_plan_path |
This is the path to the generated plan in [JSON Output Format](https://www.terraform.io/docs/internals/json-format.html). The path is relative to the Actions workspace. Terraform plans often contain sensitive information, so this output should be treated with care. |
text_plan_path |
This is the path to the generated plan in a human-readable format. The path is relative to the Actions workspace. |
to_add |
The number of resources that would be affected by this operation. |
to_change |
The number of resources that would be affected by this operation. |
to_destroy |
The number of resources that would be affected by this operation. |
to_move |
The number of resources that would be affected by this operation. |
to_import |
The number of resources that would be affected by this operation. |
run_id |
If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id. |
name: terraform-plan
description: Create a Terraform plan
author: Daniel Flook
inputs:
path:
description: The path to the Terraform root module to generate a plan for.
required: false
default: "."
workspace:
description: Terraform workspace to run the plan for.
required: false
default: "default"
label:
description: |
A friendly name for the environment the Terraform configuration is for.
This will be used in the PR comment for easy identification.
If this is set, it must be the same as the `label` used in any corresponding [`dflook/terraform-apply`](https://github.com/dflook/terraform-github-actions/tree/main/terraform-apply) action.
required: false
default: ""
variables:
description: |
Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
replace:
description: List of resources to replace, one per line.
required: false
default: ""
target:
description: |
List of resources to target, one per line.
The plan will be limited to these resources and their dependencies.
required: false
default: ""
destroy:
description: |
Set to `true` to generate a plan to destroy all resources.
This generates a plan in [destroy mode](https://developer.hashicorp.com/terraform/cli/commands/plan#planning-modes).
required: false
default: "false"
refresh:
description: |
Set to `false` to skip synchronisation of the Terraform state with actual resources.
This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans.
required: false
default: "true"
add_github_comment:
description: |
Controls whether a comment is added to the PR with the generated plan.
The default is `true`, which adds a comment to the PR with the results of the plan.
Set to `changes-only` to add a comment only when the plan indicates there are changes to apply.
Set to `always-new` to always create a new comment for each plan, instead of updating the previous comment.
Set to `false` to disable the comment - the plan will still appear in the workflow log.
required: false
default: "true"
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
changes:
description: Set to 'true' if the plan would apply any changes, 'false' if it wouldn't.
plan_path:
description: |
This is the path to the generated plan in an opaque binary format.
The path is relative to the Actions workspace.
The plan can be used as the `plan_file` input to the [dflook/terraform-apply](https://github.com/dflook/terraform-github-actions/tree/main/terraform-apply) action.
Terraform plans often contain sensitive information, so this output should be treated with care.
json_plan_path:
description: |
This is the path to the generated plan in [JSON Output Format](https://www.terraform.io/docs/internals/json-format.html).
The path is relative to the Actions workspace.
Terraform plans often contain sensitive information, so this output should be treated with care.
text_plan_path:
description: |
This is the path to the generated plan in a human-readable format.
The path is relative to the Actions workspace.
to_add:
description: The number of resources that would be affected by this operation.
to_change:
description: The number of resources that would be affected by this operation.
to_destroy:
description: The number of resources that would be affected by this operation.
to_move:
description: The number of resources that would be affected by this operation.
to_import:
description: The number of resources that would be affected by this operation.
run_id:
description: If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/plan.sh
branding:
icon: globe
color: purple
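A pull-request workflow sketch; `GITHUB_TOKEN` is exposed so the action can post the plan as a PR comment, and the root module path and var file are illustrative:
```yaml
on: [pull_request]
permissions:
  contents: read
  pull-requests: write
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/terraform-plan@v2   # major version is an assumption
        id: plan
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          path: environments/prod   # illustrative root module path
          var_file: |
            common.tfvars
      - run: echo "Plan has changes: ${{ steps.plan.outputs.changes }}"
```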
Action ID: marketplace/8BitJonny/gh-get-current-pr
Author: 8BitJonny
Publisher: 8BitJonny
Repository: github.com/8BitJonny/gh-get-current-pr
Get the PR associated with the current commit.
| Name | Required | Description |
|---|---|---|
github-token |
Optional | The GitHub token used to create an authenticated client. Default: ${{ github.token }} |
sha |
Optional | Sha to get PR for. Defaults to current sha. |
filterOutClosed |
Optional | True, False, 1 or 0 if only open PRs should be returned. Defaults to false. |
filterOutDraft |
Optional | True, False, 1 or 0 if only non-draft PRs should be returned. Defaults to false. |
| Name | Description |
|---|---|
pr_found |
The outcome if a PR has been found. If so, the other outputs are set. |
pr |
The whole PR object if one was found. |
number |
The PR number if one was found. |
pr_title |
The PR Title if one was found. |
pr_body |
The PR Body if one was found. |
pr_url |
The PR Url if one was found. |
pr_created_at |
The PR Created timestamp if one was found. |
pr_merged_at |
The PR Merged timestamp if one was found. |
pr_closed_at |
The PR Closed timestamp if one was found. |
pr_labels |
The PR Labels if any were found. |
name: Get Current Pull Request
author: 8BitJonny
description: Get the PR associated with the current commit.
inputs:
github-token:
description: The GitHub token used to create an authenticated client.
required: false
default: ${{ github.token }}
sha:
description: Sha to get PR for. Defaults to current sha.
required: false
filterOutClosed:
description: True, False, 1 or 0 if only open PRs should be returned. Defaults to false.
required: false
filterOutDraft:
description: True, False, 1 or 0 if only non-draft PRs should be returned. Defaults to false.
required: false
outputs:
pr_found:
description: The outcome if a PR has been found. If so, the other outputs are set.
pr:
description: The whole PR object if one was found.
number:
description: The PR number if one was found.
pr_title:
description: The PR Title if one was found.
pr_body:
description: The PR Body if one was found.
pr_url:
description: The PR Url if one was found.
pr_created_at:
description: The PR Created timestamp if one was found.
pr_merged_at:
description: The PR Merged timestamp if one was found.
pr_closed_at:
description: The PR Closed timestamp if one was found.
pr_labels:
description: The PR Labels if any were found.
runs:
using: node20
main: 'dist/index.js'
branding:
icon: git-pull-request
color: green
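A usage sketch that looks up the PR for the current commit and echoes a couple of the documented outputs; the version tag is an assumption:
```yaml
jobs:
  pr-info:
    runs-on: ubuntu-latest
    steps:
      - uses: 8BitJonny/gh-get-current-pr@v3   # tag is an assumption
        id: current-pr
        with:
          filterOutClosed: true
      - if: steps.current-pr.outputs.pr_found == 'true'
        run: echo "Commit belongs to PR #${{ steps.current-pr.outputs.number }}: ${{ steps.current-pr.outputs.pr_title }}"
```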
Action ID: marketplace/kamranahmedse/github-pages-blog-action
Author: Kamran Ahmed <kamranahmed.se@gmail.com>
Publisher: kamranahmedse
Repository: github.com/kamranahmedse/github-pages-blog-action
This action allows you to create a blog from your markdown files and deploy to GitHub pages
| Name | Required | Description |
|---|---|---|
token |
Optional | This option defaults to the repository scoped GitHub Token.
However, if you need more permissions for things such as deploying to another repository, you can add a Personal Access Token (PAT) here. This should be stored in the `secrets / with` menu **as a secret**.
Default: ${{ github.token }} |
branch |
Optional | This is the branch you wish to deploy to, for example gh-pages or docs. Default: gh-pages |
| Name | Description |
|---|---|
deployment-status |
The status of the deployment that indicates if the run failed or passed. Possible outputs include: success|failed|skipped |
name: 'Create Blog from Markdown Files'
description: 'This action allows you to create a blog from your markdown files and deploy to GitHub pages'
author: 'Kamran Ahmed <kamranahmed.se@gmail.com>'
runs:
using: 'node16'
main: 'dist/index.js'
branding:
icon: 'git-commit'
color: 'orange'
inputs:
token:
description: >
This option defaults to the repository scoped GitHub Token.
However, if you need more permissions for things such as deploying to another
repository, you can add a Personal Access Token (PAT) here. This should be
stored in the `secrets / with` menu **as a secret**.
required: false
default: ${{ github.token }}
branch:
description: 'This is the branch you wish to deploy to, for example gh-pages or docs.'
required: false
default: 'gh-pages'
outputs:
deployment-status:
description: 'The status of the deployment that indicates if the run failed or passed. Possible outputs include: success|failed|skipped'
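A deployment sketch triggered on pushes to the default branch; the version reference is an assumption, and the repository-scoped token default is used:
```yaml
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: kamranahmedse/github-pages-blog-action@v1   # tag is an assumption
        id: blog
        with:
          branch: gh-pages
      - run: echo "Deployment status: ${{ steps.blog.outputs.deployment-status }}"
```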
Action ID: marketplace/appleboy/gitlab-ci-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/gitlab-ci-action
Triggering GitLab CI Pipeline through the API
| Name | Required | Description |
|---|---|---|
host |
Optional | gitlab-ci base url Default: https://gitlab.com |
token |
Required | gitlab-ci token |
ref |
Optional | gitlab-ci valid refs are only the branches and tags Default: main |
project_id |
Required | gitlab project id |
insecure |
Optional | insecure mode |
variables |
Optional | pass CI/CD variables |
debug |
Optional | debug mode |
timeout |
Optional | timeout waiting for pipeline to complete |
interval |
Optional | interval waiting for pipeline to complete |
wait |
Optional | wait for pipeline to complete |
name: "Trigger GitLab CI Pipeline"
description: "Triggering GitLab CI Pipeline through the API"
author: "Bo-Yi Wu"
inputs:
host:
description: "gitlab-ci base url"
default: "https://gitlab.com"
token:
description: "gitlab-ci token"
required: true
ref:
description: "gitlab-ci valid refs are only the branches and tags"
default: "main"
project_id:
description: "gitlab project id"
required: true
insecure:
description: "insecure mode"
variables:
description: "pass CI/CD variables"
debug:
description: "debug mode"
timeout:
description: "timeout waiting for pipeline to complete"
interval:
description: "interval waiting for pipeline to complete"
wait:
description: "wait for pipeline to complete"
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "check-circle"
color: "orange"
Action ID: marketplace/actions-hub/docker
Author: Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>
Publisher: actions-hub
Repository: github.com/actions-hub/docker
GitHub Action with docker cli
| Name | Required | Description |
|---|---|---|
NAME |
Optional | Docker image (if not the same as repo) |
TAG |
Optional | Docker tag (if you want to specify a tag) |
DOCKER_USERNAME |
Optional | Docker username |
DOCKER_PASSWORD |
Optional | Docker password |
DOCKER_REGISTRY_URL |
Optional | Docker registry url |
SKIP_LOGIN |
Optional | Skip login |
name: 'The Docker CLI'
description: 'GitHub Action with docker cli'
author: 'Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>'
branding:
icon: 'box'
color: 'blue'
inputs:
NAME:
description: 'Docker image (if not the same as repo)'
TAG:
description: 'Docker tag (if you want to specify a tag)'
DOCKER_USERNAME:
description: 'Docker username'
DOCKER_PASSWORD:
description: 'Docker password'
DOCKER_REGISTRY_URL:
description: 'Docker registry url'
SKIP_LOGIN:
description: 'Skip login'
runs:
using: 'docker'
image: './cli/Dockerfile'
Action ID: marketplace/pgrimaud/action-lametric
Author: Pierre Grimaud
Publisher: pgrimaud
Repository: github.com/pgrimaud/action-lametric
Send notification to your LaMetric using Github Actions
name: 'Notify LaMetric'
author: 'Pierre Grimaud'
description: 'Send notification to your LaMetric using Github Actions'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'bell'
color: 'gray-dark'
Action ID: marketplace/actions/attest-build-provenance
Author: GitHub
Publisher: actions
Repository: github.com/actions/attest-build-provenance
Generate provenance attestations for build artifacts
| Name | Required | Description |
|---|---|---|
subject-path |
Optional | Path to the artifact serving as the subject of the attestation. Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". May contain a glob pattern or list of paths (total subject count cannot exceed 1024). |
subject-digest |
Optional | Digest of the subject for which provenance will be generated. Must be in the form "algorithm:hex_digest" (e.g. "sha256:abc123..."). Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". |
subject-name |
Optional | Subject name as it should appear in the attestation. Required when identifying the subject with the "subject-digest" input. |
subject-checksums |
Optional | Path to checksums file containing digest and name of subjects for attestation. Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". |
push-to-registry |
Optional | Whether to push the provenance statement to the image registry. Requires that the "subject-name" parameter specify the fully-qualified image name and that the "subject-digest" parameter be specified. Defaults to false. |
show-summary |
Optional | Whether to attach a list of generated attestations to the workflow run summary page. Defaults to true.
Default: True |
github-token |
Optional | The GitHub token used to make authenticated API requests.
Default: ${{ github.token }} |
| Name | Description |
|---|---|
bundle-path |
The path to the file containing the attestation bundle. |
attestation-id |
The ID of the attestation. |
attestation-url |
The URL for the attestation summary. |
name: 'Attest Build Provenance'
description: 'Generate provenance attestations for build artifacts'
author: 'GitHub'
branding:
color: 'blue'
icon: 'lock'
inputs:
subject-path:
description: >
Path to the artifact serving as the subject of the attestation. Must
specify exactly one of "subject-path", "subject-digest", or
"subject-checksums". May contain a glob pattern or list of paths
(total subject count cannot exceed 1024).
required: false
subject-digest:
description: >
Digest of the subject for which provenance will be generated. Must be in
the form "algorithm:hex_digest" (e.g. "sha256:abc123..."). Must specify
exactly one of "subject-path", "subject-digest", or "subject-checksums".
required: false
subject-name:
description: >
Subject name as it should appear in the attestation. Required when
identifying the subject with the "subject-digest" input.
subject-checksums:
description: >
Path to checksums file containing digest and name of subjects for
attestation. Must specify exactly one of "subject-path", "subject-digest",
or "subject-checksums".
required: false
push-to-registry:
description: >
Whether to push the provenance statement to the image registry. Requires
that the "subject-name" parameter specify the fully-qualified image name
and that the "subject-digest" parameter be specified. Defaults to false.
default: false
required: false
show-summary:
description: >
Whether to attach a list of generated attestations to the workflow run
summary page. Defaults to true.
default: true
required: false
github-token:
description: >
The GitHub token used to make authenticated API requests.
default: ${{ github.token }}
required: false
outputs:
bundle-path:
description: 'The path to the file containing the attestation bundle.'
value: ${{ steps.attest.outputs.bundle-path }}
attestation-id:
description: 'The ID of the attestation.'
value: ${{ steps.attest.outputs.attestation-id }}
attestation-url:
description: 'The URL for the attestation summary.'
value: ${{ steps.attest.outputs.attestation-url }}
runs:
using: 'composite'
steps:
- uses: actions/attest-build-provenance/predicate@864457a58d4733d7f1574bd8821fa24e02cf7538 # predicate@2.0.0
id: generate-build-provenance-predicate
- uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
id: attest
env:
NODE_OPTIONS: "--max-http-header-size=32768"
with:
subject-path: ${{ inputs.subject-path }}
subject-digest: ${{ inputs.subject-digest }}
subject-name: ${{ inputs.subject-name }}
subject-checksums: ${{ inputs.subject-checksums }}
predicate-type: ${{ steps.generate-build-provenance-predicate.outputs.predicate-type }}
predicate: ${{ steps.generate-build-provenance-predicate.outputs.predicate }}
push-to-registry: ${{ inputs.push-to-registry }}
show-summary: ${{ inputs.show-summary }}
github-token: ${{ inputs.github-token }}
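A sketch of attesting a built artifact; the build command and artifact path are placeholders, the major version tag is an assumption, and the `id-token` and `attestations` permissions are needed for signing and storing the attestation:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      attestations: write
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh   # placeholder build step producing bin/app
      - uses: actions/attest-build-provenance@v3   # major version is an assumption
        with:
          subject-path: bin/app
```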
Action ID: marketplace/calibreapp/github-actions
Author: Calibre
Publisher: calibreapp
Repository: github.com/calibreapp/github-actions
Effortlessly add Calibre to your GitHub workflows
| Name | Required | Description |
|---|---|---|
command |
Required | The Calibre CLI command to run |
name: "GitHub Actions"
author: "Calibre"
description: "Effortlessly add Calibre to your GitHub workflows"
inputs:
command:
description: The Calibre CLI command to run
required: true
runs:
using: "docker"
image: "docker://ghcr.io/calibreapp/github-actions/github-actions:main"
args:
- ${{ inputs.command }}
branding:
icon: "code"
color: "green"
Action ID: marketplace/amirisback/android-exoplayer
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-exoplayer
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
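A minimal usage sketch for this Docker action; the ref is an assumption, and `myInput` falls back to its default when omitted:
```yaml
- uses: amirisback/android-exoplayer@master   # ref is an assumption; pin as appropriate
  with:
    myInput: 'world'
```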
Action ID: marketplace/chinthakagodawita/autoupdate
Author: Unknown
Publisher: chinthakagodawita
Repository: github.com/chinthakagodawita/autoupdate
A GitHub Action that auto-updates PRs with changes from their destination branch
name: 'Auto Update'
description: 'A GitHub Action that auto-updates PRs with changes from their destination branch'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'git-pull-request'
color: 'blue'
Action ID: marketplace/mheap/github-action-pr-heroku-review-app
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-pr-heroku-review-app
Create a Heroku review app when a PR is raised by someone with write or admin access
name: Heroku Review Application
description: Create a Heroku review app when a PR is raised by someone with write or admin access
runs:
using: docker
image: Dockerfile
branding:
icon: book-open
color: orange
Action ID: marketplace/appleboy/ssh-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/ssh-action
Executing remote ssh commands
| Name | Required | Description |
|---|---|---|
host |
Optional | SSH host address or IP to connect to. |
port |
Optional | SSH port number for the connection. Default: 22 |
passphrase |
Optional | Passphrase to decrypt the SSH private key if protected. |
username |
Optional | SSH username for authentication on the remote server. |
password |
Optional | SSH password for authentication (use secrets for sensitive data). |
protocol |
Optional | IP protocol version to use. Options: "tcp" (default), "tcp4" (IPv4 only), or "tcp6" (IPv6 only). Default: tcp |
sync |
Optional | When true, executes commands synchronously across multiple hosts (one after another). |
use_insecure_cipher |
Optional | Enable additional legacy ciphers that might be less secure but more compatible with older systems. |
cipher |
Optional | Specify custom cipher algorithms for encryption. Leave empty to use secure defaults. |
timeout |
Optional | Maximum time to wait when establishing the SSH connection, e.g., '30s', '1m'. Default: 30s |
command_timeout |
Optional | Maximum execution time for the remote commands before terminating, e.g., '10m', '1h'. Default: 10m |
key |
Optional | Raw content of the SSH private key for authentication (use secrets for sensitive data). |
key_path |
Optional | Path to the SSH private key file on the runner. |
fingerprint |
Optional | SHA256 fingerprint of the host public key for verification to prevent MITM attacks. |
proxy_host |
Optional | Proxy server hostname or IP if connecting through an SSH jump host. |
proxy_port |
Optional | SSH port number for the proxy connection. Default: 22 |
proxy_username |
Optional | Username for authentication on the proxy server. |
proxy_password |
Optional | Password for authentication on the proxy server (use secrets for sensitive data). |
proxy_protocol |
Optional | IP protocol version for proxy. Options: "tcp" (default), "tcp4" (IPv4 only), or "tcp6" (IPv6 only). Default: tcp |
proxy_passphrase |
Optional | Passphrase to decrypt the proxy SSH private key if protected. |
proxy_timeout |
Optional | Maximum time to wait when establishing the proxy SSH connection, e.g., '30s', '1m'. Default: 30s |
proxy_key |
Optional | Raw content of the SSH proxy private key for authentication (use secrets for sensitive data). |
proxy_key_path |
Optional | Path to the SSH proxy private key file on the runner. |
proxy_fingerprint |
Optional | SHA256 fingerprint of the proxy host public key for verification. |
proxy_cipher |
Optional | Specify custom cipher algorithms for proxy connection encryption. |
proxy_use_insecure_cipher |
Optional | Enable additional legacy ciphers for proxy connections (less secure but more compatible). |
script |
Optional | Commands to execute on the remote server (inline script string). |
script_path |
Optional | Path to a local file containing commands to execute on the remote server. |
envs |
Optional | Environment variables to expose to the remote script, format: key=value,key2=value2. |
envs_format |
Optional | Format specification for environment variable transfer (for advanced usage). |
debug |
Optional | Set to true to enable verbose logging for troubleshooting connection issues. |
allenvs |
Optional | When true, passes all GitHub Actions environment variables to the remote script. |
request_pty |
Optional | Request a pseudo-terminal from the server (required for interactive commands or sudo). |
curl_insecure |
Optional | When true, uses the --insecure option with curl for insecure downloads. Default: false |
capture_stdout |
Optional | When true, captures and returns standard output from the commands as action output. Default: false |
version |
Optional | The version of drone-ssh to use. |
| Name | Description |
|---|---|
stdout |
Standard output of the executed commands when capture_stdout is enabled. |
name: "SSH Remote Commands"
description: "Executing remote ssh commands"
author: "Bo-Yi Wu"
inputs:
host:
description: "SSH host address or IP to connect to."
port:
description: "SSH port number for the connection."
default: "22"
passphrase:
description: "Passphrase to decrypt the SSH private key if protected."
username:
description: "SSH username for authentication on the remote server."
password:
description: "SSH password for authentication (use secrets for sensitive data)."
protocol:
description: 'IP protocol version to use. Options: "tcp" (default), "tcp4" (IPv4 only), or "tcp6" (IPv6 only).'
default: "tcp"
sync:
description: "When true, executes commands synchronously across multiple hosts (one after another)."
use_insecure_cipher:
description: "Enable additional legacy ciphers that might be less secure but more compatible with older systems."
cipher:
description: "Specify custom cipher algorithms for encryption. Leave empty to use secure defaults."
timeout:
description: "Maximum time to wait when establishing the SSH connection, e.g., '30s', '1m'."
default: "30s"
command_timeout:
description: "Maximum execution time for the remote commands before terminating, e.g., '10m', '1h'."
default: "10m"
key:
description: "Raw content of the SSH private key for authentication (use secrets for sensitive data)."
key_path:
description: "Path to the SSH private key file on the runner."
fingerprint:
description: "SHA256 fingerprint of the host public key for verification to prevent MITM attacks."
proxy_host:
description: "Proxy server hostname or IP if connecting through an SSH jump host."
proxy_port:
description: "SSH port number for the proxy connection."
default: "22"
proxy_username:
description: "Username for authentication on the proxy server."
proxy_password:
description: "Password for authentication on the proxy server (use secrets for sensitive data)."
proxy_protocol:
description: 'IP protocol version for proxy. Options: "tcp" (default), "tcp4" (IPv4 only), or "tcp6" (IPv6 only).'
default: "tcp"
proxy_passphrase:
description: "Passphrase to decrypt the proxy SSH private key if protected."
proxy_timeout:
description: "Maximum time to wait when establishing the proxy SSH connection, e.g., '30s', '1m'."
default: "30s"
proxy_key:
description: "Raw content of the SSH proxy private key for authentication (use secrets for sensitive data)."
proxy_key_path:
description: "Path to the SSH proxy private key file on the runner."
proxy_fingerprint:
description: "SHA256 fingerprint of the proxy host public key for verification."
proxy_cipher:
description: "Specify custom cipher algorithms for proxy connection encryption."
proxy_use_insecure_cipher:
description: "Enable additional legacy ciphers for proxy connections (less secure but more compatible)."
script:
description: "Commands to execute on the remote server (inline script string)."
script_path:
description: "Path to a local file containing commands to execute on the remote server."
envs:
description: "Environment variables to expose to the remote script, format: key=value,key2=value2."
envs_format:
description: "Format specification for environment variable transfer (for advanced usage)."
debug:
description: "Set to true to enable verbose logging for troubleshooting connection issues."
allenvs:
description: "When true, passes all GitHub Actions environment variables to the remote script."
request_pty:
description: "Request a pseudo-terminal from the server (required for interactive commands or sudo)."
curl_insecure:
description: "When true, uses the --insecure option with curl for insecure downloads."
default: "false"
capture_stdout:
description: "When true, captures and returns standard output from the commands as action output."
default: "false"
version:
description: |
The version of drone-ssh to use.
outputs:
stdout:
description: "Standard output of the executed commands when capture_stdout is enabled."
value: ${{ steps.entrypoint.outputs.stdout }}
runs:
using: "composite"
steps:
- name: Set GitHub Path
run: echo "$GITHUB_ACTION_PATH" >> $GITHUB_PATH
shell: bash
env:
GITHUB_ACTION_PATH: ${{ github.action_path }}
- id: entrypoint
name: Run entrypoint.sh
run: entrypoint.sh
shell: bash
env:
GITHUB_ACTION_PATH: ${{ github.action_path }}
INPUT_HOST: ${{ inputs.host }}
INPUT_PORT: ${{ inputs.port }}
INPUT_PROTOCOL: ${{ inputs.protocol }}
INPUT_USERNAME: ${{ inputs.username }}
INPUT_PASSWORD: ${{ inputs.password }}
INPUT_PASSPHRASE: ${{ inputs.passphrase }}
INPUT_KEY: ${{ inputs.key }}
INPUT_KEY_PATH: ${{ inputs.key_path }}
INPUT_FINGERPRINT: ${{ inputs.fingerprint }}
INPUT_PROXY_HOST: ${{ inputs.proxy_host }}
INPUT_PROXY_PORT: ${{ inputs.proxy_port }}
INPUT_PROXY_USERNAME: ${{ inputs.proxy_username }}
INPUT_PROXY_PASSWORD: ${{ inputs.proxy_password }}
INPUT_PROXY_PASSPHRASE: ${{ inputs.proxy_passphrase }}
INPUT_PROXY_KEY: ${{ inputs.proxy_key }}
INPUT_PROXY_KEY_PATH: ${{ inputs.proxy_key_path }}
INPUT_PROXY_FINGERPRINT: ${{ inputs.proxy_fingerprint }}
INPUT_TIMEOUT: ${{ inputs.timeout }}
INPUT_PROXY_TIMEOUT: ${{ inputs.proxy_timeout }}
INPUT_COMMAND_TIMEOUT: ${{ inputs.command_timeout }}
INPUT_SCRIPT: ${{ inputs.script }}
INPUT_SCRIPT_FILE: ${{ inputs.script_path }}
INPUT_ENVS: ${{ inputs.envs }}
INPUT_ENVS_FORMAT: ${{ inputs.envs_format }}
INPUT_DEBUG: ${{ inputs.debug }}
INPUT_ALL_ENVS: ${{ inputs.allenvs }}
INPUT_REQUEST_PTY: ${{ inputs.request_pty }}
INPUT_USE_INSECURE_CIPHER: ${{ inputs.use_insecure_cipher }}
INPUT_CIPHER: ${{ inputs.cipher }}
INPUT_PROXY_USE_INSECURE_CIPHER: ${{ inputs.proxy_use_insecure_cipher }}
INPUT_PROXY_CIPHER: ${{ inputs.proxy_cipher }}
INPUT_SYNC: ${{ inputs.sync }}
INPUT_CAPTURE_STDOUT: ${{ inputs.capture_stdout }}
INPUT_CURL_INSECURE: ${{ inputs.curl_insecure }}
DRONE_SSH_VERSION: ${{ inputs.version }}
branding:
icon: "terminal"
color: "gray-dark"
Action ID: marketplace/jidicula/go-fuzz-action
Author: Unknown
Publisher: jidicula
Repository: github.com/jidicula/go-fuzz-action
A GitHub Action for running go test -fuzz.
| Name | Required | Description |
|---|---|---|
packages |
Optional | Run fuzz test on these packages. Corresponds to the `[packages]` input for the `go test` command. Default: . |
fuzz-regexp |
Optional | Run the fuzz test matching the regular expression. Corresponds to the `-fuzz` flag for the `go test` command. Default: Fuzz |
fuzz-time |
Required | Fuzz target iteration duration, specified as a `time.Duration` (for example `1h30s`). Corresponds to `-fuzztime` flag for the `go test` command. Ensure this is less than your job timeout. |
fuzz-minimize-time |
Optional | Fuzz minimization duration, specified as a `time.Duration` (for example `1h30s`). Corresponds to `-fuzzminimizetime` flag for the `go test` command. If you provide this input, ensure it is less than your job timeout. Default: 10s |
go-version |
Optional | Which version of Go to use for fuzzing Default: 1.21 |
name: "Go fuzz test"
description: "A GitHub Action for running go test -fuzz."
branding:
icon: "check-circle"
color: "green"
inputs:
packages:
description: 'Run fuzz test on these packages. Corresponds to the `[packages]` input for the `go test` command.'
required: false
default: '.'
fuzz-regexp:
description: 'Run the fuzz test matching the regular expression. Corresponds to the `-fuzz` flag for the `go test` command.'
required: false
default: 'Fuzz'
fuzz-time:
description: 'Fuzz target iteration duration, specified as a `time.Duration` (for example `1h30s`). Corresponds to `-fuzztime` flag for the `go test` command. Ensure this is less than your job timeout.'
required: true
fuzz-minimize-time:
description: 'Fuzz minimization duration, specified as a `time.Duration` (for example `1h30s`). Corresponds to `-fuzzminimizetime` flag for the `go test` command. If you provide this input, ensure it is less than your job timeout.'
required: false
default: '10s'
go-version:
description: 'Which version of Go to use for fuzzing'
required: false
default: '1.21'
runs:
using: 'composite'
steps:
- uses: actions/checkout@v6
- uses: actions/setup-go@v6
with:
go-version: '${{ inputs.go-version }}'
- shell: bash
run: go test "${{ inputs.packages }}" -fuzz="${{ inputs.fuzz-regexp }}" -fuzztime="${{ inputs.fuzz-time }}" -fuzzminimizetime="${{ inputs.fuzz-minimize-time }}"
- name: Upload fuzz failure seed corpus as run artifact
if: failure()
uses: actions/upload-artifact@v5
with:
name: testdata
path: testdata
- run: echo "EVENT NAME IS ${{ github.event_name }}"
if: failure()
shell: bash
- name: Save PR head commit SHA
if: failure() && github.event_name == 'pull_request'
shell: bash
run: |
SHA="${{ github.event.pull_request.head.sha }}"
echo "SHA=$SHA" >> $GITHUB_ENV
- name: Save latest commit SHA if not PR
if: failure() && github.event_name != 'pull_request'
shell: bash
run: echo "SHA=${{ github.sha }}" >> $GITHUB_ENV
- name: Output message
if: failure()
shell: bash
run: |
MESSAGE='Fuzz test failed on commit ${{ env.SHA }}. To troubleshoot locally, use the [GitHub CLI](https://cli.github.com) to download the seed corpus with\n```\ngh run download ${{ github.run_id }} -n testdata\n```'
DEEPLINK="https://github.com/${{ github.repository }}/commit/${{ env.SHA }}"
echo -e "${MESSAGE/${{ env.SHA }}/$DEEPLINK}"
echo -e "${MESSAGE/${{ env.SHA }}/[${GITHUB_SHA:0:8}]($DEEPLINK)}" >> $GITHUB_STEP_SUMMARY
- name: Report failure
uses: actions/github-script@v8
if: failure() && github.event_name == 'pull_request'
with:
script: |
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: 'Fuzz test failed on commit ${{ env.SHA }}. To troubleshoot locally, use the [GitHub CLI](https://cli.github.com) to download the seed corpus with\n```\ngh run download ${{ github.run_id }} -n testdata\n```'
})
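A minimal workflow sketch for wiring up these inputs; the `@v2` ref, the schedule, and the package/target names are illustrative assumptions rather than values taken from the action's documentation. Note that the action checks out the repository itself, so no separate checkout step is needed.
```yaml
# Hypothetical nightly fuzz run (ref, cron, package path and target name are placeholders).
name: fuzz
on:
  schedule:
    - cron: '0 3 * * *'
jobs:
  fuzz:
    runs-on: ubuntu-latest
    steps:
      - uses: jidicula/go-fuzz-action@v2     # pin to a real release or SHA in practice
        with:
          packages: ./internal/parser        # hypothetical package path
          fuzz-regexp: FuzzParse             # hypothetical fuzz target
          fuzz-time: 5m
          fuzz-minimize-time: 1m
          go-version: '1.21'
```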
Action ID: marketplace/dflook/tofu-apply
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-apply
Apply an OpenTofu plan
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the OpenTofu root module to apply Default: . |
workspace |
Optional | OpenTofu workspace to run the apply in Default: default |
label |
Optional | A friendly name for the environment the OpenTofu configuration is for. This will be used in the PR comment for easy identification. It must be the same as the `label` used in the corresponding [`dflook/tofu-plan`](https://github.com/dflook/terraform-github-actions/tree/main/tofu-plan) action. |
variables |
Optional | Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
replace |
Optional | List of resources to replace, one per line. |
target |
Optional | List of resources to apply, one per line. The apply operation will be limited to these resources and their dependencies. |
exclude |
Optional | List of resources to exclude from the apply operation, one per line. The apply operation will include all resources except the specified ones and their dependencies. Requires OpenTofu 1.9+. |
destroy |
Optional | Set to `true` to destroy all resources.
This generates and applies a plan in [destroy mode](https://opentofu.org/docs/cli/commands/plan/#planning-modes).
Default: false |
refresh |
Optional | Set to `false` to skip synchronisation of the OpenTofu state with actual resources.
This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans.
Default: true |
plan_path |
Optional | Path to a plan file to apply. This would have been generated by a previous [`dflook/tofu-plan`](https://github.com/dflook/terraform-github-actions/tree/main/tofu-plan) action. The default behaviour when this is not set is to generate a plan from the current configuration and compare it to the plan attached to the PR comment. If it is logically the same, the plan will be applied. When this is set to a plan file, the plan will not be generated again. If it is the exact same plan as the one attached to the PR comment, it will be applied. This will be faster than generating a new plan. There are downsides to applying a stored plan: - The plan may contain sensitive information so must be stored securely, possibly outside of GitHub. - It does not account for any changes that have occurred since it was generated, and may no longer be correct. - Plans must be generated and applied in strict order. Multiple open PRs will cause conflicts if they are applied out of order. - Plans are not portable between platforms. - OpenTofu and provider versions must match between the plan generation and apply. When `auto_approve` is set to `true`, the plan will be applied without checking if it is the same as the one attached to the PR comment. |
auto_approve |
Optional | When set to `true`, plans are always applied.
The default is `false`, which requires plans to have been added to a pull request comment.
Default: false |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
json_plan_path |
This is the path to the generated plan in [JSON Output Format](https://opentofu.org/docs/internals/json-format/). The path is relative to the Actions workspace. OpenTofu plans often contain sensitive information, so this output should be treated with care. This won't be set if the backend type is `remote` - OpenTofu does not support saving remote plans. |
text_plan_path |
This is the path to the generated plan in a human-readable format. The path is relative to the Actions workspace. This won't be set if `auto_approve` is true while using a `remote` backend. |
failure-reason |
When the job outcome is `failure`, this output may be set. The value may be one of: - `apply-failed` - The Terraform apply operation failed. - `plan-changed` - The approved plan is no longer accurate, so the apply will not be attempted. - `state-locked` - The Terraform state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
lock-info |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: ```json { "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" } ``` |
run_id |
If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id. |
json_output_path |
This is the path to all the root module outputs in a JSON file. The path is relative to the Actions workspace. For example, with the OpenTofu config: ```hcl output "service_hostname" { value = "example.com" } ``` The file pointed to by this output will contain: ```json { "service_hostname": "example.com" } ``` OpenTofu list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object. |
name: tofu-apply
description: Apply an OpenTofu plan
author: Daniel Flook
inputs:
path:
description: Path to the OpenTofu root module to apply
required: false
default: "."
workspace:
description: OpenTofu workspace to run the apply in
required: false
default: "default"
label:
description: |
A friendly name for the environment the OpenTofu configuration is for.
This will be used in the PR comment for easy identification.
It must be the same as the `label` used in the corresponding [`dflook/tofu-plan`](https://github.com/dflook/terraform-github-actions/tree/main/tofu-plan) action.
required: false
default: ""
variables:
description: |
Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
replace:
description: List of resources to replace, one per line.
required: false
default: ""
target:
description: |
List of resources to apply, one per line.
The apply operation will be limited to these resources and their dependencies.
required: false
default: ""
exclude:
description: |
List of resources to exclude from the apply operation, one per line.
The apply operation will include all resources except the specified ones and their dependencies.
Requires OpenTofu 1.9+.
required: false
default: ""
destroy:
description: |
Set to `true` to destroy all resources.
This generates and applies a plan in [destroy mode](https://opentofu.org/docs/cli/commands/plan/#planning-modes).
required: false
default: "false"
refresh:
description: |
Set to `false` to skip synchronisation of the OpenTofu state with actual resources.
This will make the plan faster but may be out of date with the actual resources, which can lead to incorrect plans.
required: false
default: "true"
plan_path:
description: |
Path to a plan file to apply. This would have been generated by a previous [`dflook/tofu-plan`](https://github.com/dflook/terraform-github-actions/tree/main/tofu-plan) action.
The default behaviour when this is not set is to generate a plan from the current configuration and compare it to the plan attached to the PR comment.
If it is logically the same, the plan will be applied.
When this is set to a plan file, the plan will not be generated again. If it is the exact same plan as the one attached to the PR comment, it will be applied.
This will be faster than generating a new plan.
There are downsides to applying a stored plan:
- The plan may contain sensitive information so must be stored securely, possibly outside of GitHub.
- It does not account for any changes that have occurred since it was generated, and may no longer be correct.
- Plans must be generated and applied in strict order. Multiple open PRs will cause conflicts if they are applied out of order.
- Plans are not portable between platforms.
- OpenTofu and provider versions must match between the plan generation and apply.
When `auto_approve` is set to `true`, the plan will be applied without checking if it is the same as the one attached to the PR comment.
required: false
default: ""
auto_approve:
description: |
When set to `true`, plans are always applied.
The default is `false`, which requires plans to have been added to a pull request comment.
required: false
default: "false"
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
json_plan_path:
description: |
This is the path to the generated plan in [JSON Output Format](https://opentofu.org/docs/internals/json-format/).
The path is relative to the Actions workspace.
OpenTofu plans often contain sensitive information, so this output should be treated with care.
This won't be set if the backend type is `remote` - OpenTofu does not support saving remote plans.
text_plan_path:
description: |
This is the path to the generated plan in a human-readable format.
The path is relative to the Actions workspace.
This won't be set if `auto_approve` is true while using a `remote` backend.
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `apply-failed` - The Terraform apply operation failed.
- `plan-changed` - The approved plan is no longer accurate, so the apply will not be attempted.
- `state-locked` - The Terraform state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
run_id:
description: If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id.
json_output_path:
description: |
This is the path to all the root module outputs in a JSON file.
The path is relative to the Actions workspace.
For example, with the OpenTofu config:
```hcl
output "service_hostname" {
value = "example.com"
}
```
The file pointed to by this output will contain:
```json
{
"service_hostname": "example.com"
}
```
OpenTofu list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/apply.sh
branding:
icon: globe
color: purple
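A sketch of how an apply-after-merge job might look; the trigger, module path, variables, and `@v2` ref are assumptions, and the `GITHUB_TOKEN` environment variable is assumed to be what the action uses to read the approved plan from the PR comment.
```yaml
# Hypothetical apply-after-merge job (ref, paths and variables are placeholders).
name: apply
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/tofu-apply@v2                        # illustrative ref
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}       # assumed requirement for the PR-comment flow
        with:
          path: environments/production                   # hypothetical root module path
          workspace: default
          var_file: |
            common.tfvars
          variables: |
            instance_count = 2
```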
Action ID: marketplace/azure/bicep-deploy
Author: Microsoft
Publisher: azure
Repository: github.com/azure/bicep-deploy
GitHub Action for deploying to Azure
| Name | Required | Description |
|---|---|---|
type |
Required | Specifies the execution type, which can be either 'deployment' or 'deploymentStack'. |
operation |
Required | Specifies the operation to perform. For deployment, choose from 'create', 'validate', 'whatIf'. For deploymentStack, choose from 'create', 'delete', 'validate'. |
scope |
Required | Specifies the scope of the deployment or deploymentStack. For deployment, choose from 'resourceGroup', 'subscription', 'managementGroup', 'tenant'. For deploymentStack, choose from 'resourceGroup', 'subscription', 'managementGroup'. |
name |
Optional | Specifies the name of the deployment or deploymentStack. |
location |
Optional | Specifies the location of the deployment or deploymentStack. Must be provided if the 'scope' parameter is 'subscription', 'managementGroup' or 'tenant'. |
tenant-id |
Optional | Specifies the tenant ID. Required if the 'scope' parameter is 'tenant'. |
management-group-id |
Optional | Specifies the management group ID. Required if the 'scope' parameter is 'managementGroup'. |
subscription-id |
Optional | Specifies the subscription ID. Required if the 'scope' parameter is 'subscription' or 'resourceGroup'. |
resource-group-name |
Optional | Specifies the resource group name. Required if the 'scope' parameter is 'resourceGroup'. |
template-file |
Optional | Specifies the path to the template file. |
parameters-file |
Optional | Specifies the path to the parameters file (.json or .bicepparam). |
parameters |
Optional | Specifies the inline parameters to use (as json object). |
masked-outputs |
Optional | Specifies output names to mask values for. |
environment |
Optional | Specifies the Azure environment to use. Choose from 'azureCloud', 'azureChinaCloud', 'azureGermanCloud', 'azureUSGovernment'. |
what-if-exclude-change-types |
Optional | Specifies the change types to exclude from the 'What If' operation. |
validation-level |
Optional | Specifies the validation level. Only supported for deployment what-if and validate operations. Choose from 'provider', 'template', or 'providerNoRbac'. |
action-on-unmanage-resources |
Optional | Specifies the action to take on unmanaged resources. Choose from 'delete' or 'detach'. |
action-on-unmanage-resourcegroups |
Optional | Specifies the action to take on unmanaged resource groups. Choose from 'delete' or 'detach'. |
action-on-unmanage-managementgroup |
Optional | Specifies the action to take on unmanaged management groups. Choose from 'delete' or 'detach'. |
deny-settings-mode |
Optional | Specifies the mode of the deny settings. Choose from 'denyDelete', 'denyWriteAndDelete', 'none'. |
deny-settings-excluded-actions |
Optional | Specifies the excluded actions for the deny settings. |
deny-settings-excluded-principals |
Optional | Specifies the excluded principals for the deny settings. |
deny-settings-apply-to-child-scopes |
Optional | When specified, the deny setting mode configuration also applies to the child scope of the managed resources. |
bypass-stack-out-of-sync-error |
Optional | Specifies whether to bypass the stack out of sync error. Choose from 'true' or 'false'. |
description |
Optional | Specifies the description of the deploymentStack. |
tags |
Optional | Specifies the tags for the deploymentStack. |
bicep-version |
Optional | Specifies the version of Bicep to use for compilation. If not provided, the latest version will be used. Example string: '0.38.5' |
name: "Bicep Deploy"
description: "GitHub Action for deploying to Azure"
author: "Microsoft"
branding:
icon: "upload-cloud"
color: "blue"
inputs:
# Required inputs
type:
description: "Specifies the execution type, which can be either 'deployment' or 'deploymentStack'."
required: true
operation:
description: "Specifies the operation to perform. For deployment, choose from 'create', 'validate', 'whatIf'. For deploymentStack, choose from 'create', 'delete', 'validate'."
required: true
scope:
description: "Specifies the scope of the deployment or deploymentStack. For deployment, choose from 'resourceGroup', 'subscription', 'managementGroup', 'tenant'. For deploymentStack, choose from 'resourceGroup', 'subscription', 'managementGroup'."
required: true
# Optional inputs
name:
description: "Specifies the name of the deployment or deploymentStack."
required: false
location:
description: "Specifies the location of the deployment or deploymentStack. Must be provided if the 'scope' parameter is 'subscription', 'managementGroup' or 'tenant'."
required: false
tenant-id:
description: "Specifies the tenant ID. Required if the 'scope' parameter is 'tenant'."
required: false
management-group-id:
description: "Specifies the management group ID. Required if the 'scope' parameter is 'managementGroup'."
required: false
subscription-id:
description: "Specifies the subscription ID. Required if the 'scope' parameter is 'subscription' or 'resourceGroup'."
required: false
resource-group-name:
description: "Specifies the resource group name. Required if the 'scope' parameter is 'resourceGroup'."
required: false
template-file:
description: "Specifies the path to the template file."
required: false
parameters-file:
description: "Specifies the path to the parameters file (.json or .bicepparam)."
required: false
parameters:
description: "Specifies the inline parameters to use (as json object)."
required: false
masked-outputs:
description: "Specifies output names to mask values for."
required: false
environment:
description: "Specifies the Azure environment to use. Choose from 'azureCloud', 'azureChinaCloud', 'azureGermanCloud', 'azureUSGovernment'."
required: false
# What-if and validate specific inputs
what-if-exclude-change-types:
description: "Specifies the change types to exclude from the 'What If' operation."
required: false
validation-level:
description: "Specifies the validation level. Only supported for deployment what-if and validate operations. Choose from 'provider', 'template', or 'providerNoRbac'."
required: false
# DeploymentStack specific inputs
action-on-unmanage-resources:
description: "Specifies the action to take on unmanaged resources. Choose from 'delete' or 'detach'."
required: false
action-on-unmanage-resourcegroups:
description: "Specifies the action to take on unmanaged resource groups. Choose from 'delete' or 'detach'."
required: false
action-on-unmanage-managementgroup:
description: "Specifies the action to take on unmanaged management groups. Choose from 'delete' or 'detach'."
required: false
deny-settings-mode:
description: "Specifies the mode of the deny settings. Choose from 'denyDelete', 'denyWriteAndDelete', 'none'."
required: false
deny-settings-excluded-actions:
description: "Specifies the excluded actions for the deny settings."
required: false
deny-settings-excluded-principals:
description: "Specifies the excluded principals for the deny settings."
required: false
deny-settings-apply-to-child-scopes:
description: "When specified, the deny setting mode configuration also applies to the child scope of the managed resources."
required: false
bypass-stack-out-of-sync-error:
description: "Specifies whether to bypass the stack out of sync error. Choose from 'true' or 'false'."
required: false
description:
description: "Specifies the description of the deploymentStack."
required: false
tags:
description: "Specifies the tags for the deploymentStack."
required: false
bicep-version:
description: "Specifies the version of Bicep to use for compilation. If not provided, the latest version will be used. Example string: '0.38.5'"
required: false
runs:
using: node20
main: dist/index.cjs
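A minimal deployment sketch under the assumption that an Azure OIDC login step runs first; the ref, file names, and IDs are placeholders.
```yaml
# Hypothetical resource-group deployment (ref, IDs and file names are placeholders).
name: deploy
on: workflow_dispatch
permissions:
  id-token: write   # assumed OIDC login (e.g. azure/login) happens before this step
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # An Azure login step would normally run here.
      - uses: azure/bicep-deploy@v2                               # illustrative ref
        with:
          type: deployment
          operation: create
          scope: resourceGroup
          subscription-id: 00000000-0000-0000-0000-000000000000   # placeholder
          resource-group-name: my-rg                              # placeholder
          template-file: main.bicep
          parameters-file: main.bicepparam
```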
Action ID: marketplace/axel-op/dart-package-analyzer
Author: axel-op
Publisher: axel-op
Repository: github.com/axel-op/dart-package-analyzer
Performs static analysis, linting, and formatting checks to compute the Pub score of your Dart/Flutter package.
| Name | Required | Description |
|---|---|---|
githubToken |
Required | Token to connect to GitHub. Use secrets.GITHUB_TOKEN |
relativePath |
Optional | Path of the package relative to the root of the repository |
| Name | Description |
|---|---|
total |
Total score of the package |
total_max |
Maximum score the package can get |
conventions |
Score for the category 'Follow Dart file conventions' |
conventions_max |
Maximum score for the category 'Follow Dart file conventions' |
documentation |
Score for the category 'Provide documentation' |
documentation_max |
Maximum score for the category 'Provide documentation' |
platforms |
Score for the category 'Support multiple platforms' |
platforms_max |
Maximum score for the category 'Support multiple platforms' |
analysis |
Score for the category 'Static analysis' |
analysis_max |
Maximum score for the category 'Static analysis' |
dependencies |
Score for the category 'Support up-to-date dependencies' |
dependencies_max |
Maximum score for the category 'Support up-to-date dependencies' |
json_output |
The pana output in JSON format |
name: "Dart/Flutter Package Analyzer"
description: "Performs static analysis, linting, formatting, to compute the Pub score of your Dart/Flutter package."
author: "axel-op"
branding:
color: "blue"
icon: "feather"
inputs:
githubToken:
description: "Token to connect to GitHub. Use secrets.GITHUB_TOKEN"
required: true
relativePath:
description: "Path of the package relatively to the root of the repository"
required: false
default: ""
outputs:
total:
description: "Total score of the package"
total_max:
description: "Maximum score the package can get"
conventions:
description: "Score for the category 'Follow Dart file conventions'"
conventions_max:
description: "Maximum score for the category 'Follow Dart file conventions'"
documentation:
description: "Score for the category 'Provide documentation'"
documentation_max:
description: "Maximum score for the category 'Provide documentation'"
platforms:
description: "Score for the category 'Support multiple platforms'"
platforms_max:
description: "Maximum score for the category 'Support multiple platforms'"
analysis:
description: "Score for the category 'Static analysis'"
analysis_max:
description: "Maximum score for the category 'Static analysis'"
dependencies:
description: "Score for the category 'Support up-to-date dependencies'"
dependencies_max:
description: "Maximum score for the category 'Support up-to-date dependencies'"
json_output:
description: "The pana output in JSON format"
runs:
using: "docker"
image: "docker://ghcr.io/axel-op/dart-package-analyzer:v3"
entrypoint: "/dart_package_analyzer"
Action ID: marketplace/amirisback/frogo-ui-kit-deprecated
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/frogo-ui-kit-deprecated
Library Easy UI Kit Based on Design Guideline, Full and Clear Documentation, :books:
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-UI-Kit'
description: 'Library Easy UI Kit Based on Design Guideline, Full and Clear Documentation, :books:'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/rhysd/tarpaulin-action
Author: actions-rs team
Publisher: rhysd
Repository: github.com/rhysd/tarpaulin-action
Gather Rust code coverage information with Tarpaulin
| Name | Required | Description |
|---|---|---|
version |
Required | The version of cargo-tarpaulin to install Default: latest |
args |
Optional | Extra command line arguments passed to cargo-tarpaulin |
timeout |
Optional | The maximum time in seconds without a response from a test before timeout |
run-types |
Optional | The type of the coverage run [possible values: Tests, Doctests] |
out-type |
Optional | Output format of coverage report [possible values: Json, Toml, Stdout, Xml, Html, Lcov] Default: Xml |
name: 'rust-tarpaulin'
description: 'Gather Rust code coverage information with Tarpaulin'
author: 'actions-rs team'
branding:
icon: play-circle
color: black
runs:
using: 'node12'
main: 'dist/index.js'
inputs:
version:
description: 'The version of cargo-tarpaulin to install'
required: true
default: 'latest'
args:
required: false
description: 'Extra command line arguments passed to cargo-tarpaulin'
timeout:
description: 'The maximum time in seconds without a response from a test before timeout'
required: false
run-types:
description: 'The type of the coverage run [possible values: Tests, Doctests]'
required: false
out-type:
description: 'Output format of coverage report [possible values: Json, Toml, Stdout, Xml, Html, Lcov]'
required: false
default: 'Xml'
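A coverage-job sketch; it assumes a Rust toolchain is already available on the runner, and the ref and extra arguments are illustrative.
```yaml
# Hypothetical coverage run (ref and args are placeholders).
name: coverage
on: [push]
jobs:
  tarpaulin:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rhysd/tarpaulin-action@v0.1   # illustrative ref
        with:
          version: latest
          out-type: Lcov
          args: '--ignore-tests'            # assumed cargo-tarpaulin flag
```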
Action ID: marketplace/azure/appservice-build-test1
Author: Unknown
Publisher: azure
Repository: github.com/azure/appservice-build-test1
Build an Azure Web App on Linux using the Oryx build system.
| Name | Required | Description |
|---|---|---|
source-directory |
Optional | Relative path (within the repository) to the source directory of the project you want to build; if no value is provided for this, the root of the repository ("GITHUB_WORKSPACE" environment variable) will be built. |
platform |
Optional | Programming platform used to build the web app; if no value is provided, Oryx will determine the platform to build with. The supported values are "dotnet", "nodejs", "php" and "python". |
platform-version |
Optional | Version of the programming platform used to build the web app; if no value is provided, Oryx will determine the version needed to build the repository. |
name: 'App Service Web App Build Action'
description: 'Build an Azure Web App on Linux using the Oryx build system.'
inputs:
source-directory:
description: 'Relative path (within the repository) to the source directory of the project you want to build; if no value is provided for this, the root of the repository ("GITHUB_WORKSPACE" environment variable) will be built.'
required: false
platform:
description: 'Programming platform used to build the web app; if no value is provided, Oryx will determine the platform to build with. The supported values are "dotnet", "nodejs", "php" and "python".'
required: false
platform-version:
description: 'Version of the programming platform used to build the web app; if no value is provided, Oryx will determine the version needed to build the repository.'
required: false
branding:
icon: 'login.svg'
color: 'blue'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.source-directory }}
- ${{ inputs.platform }}
- ${{ inputs.platform-version }}
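A minimal sketch of how the documented inputs could be supplied; the ref, source path, and platform values are placeholders.
```yaml
# Hypothetical Oryx build of a Node.js web app (ref and paths are placeholders).
name: oryx-build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/appservice-build-test1@v1   # illustrative ref
        with:
          source-directory: src/webapp          # hypothetical source directory
          platform: nodejs
          platform-version: '18'
```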
Action ID: marketplace/amirisback/android-studio-game-dev
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-studio-game-dev
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Game Dev'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/container-scan
Author: Unknown
Publisher: azure
Repository: github.com/azure/container-scan
Scan container images for vulnerabilities and CIS guidelines
| Name | Required | Description |
|---|---|---|
image-name |
Required | Docker image to scan |
severity-threshold |
Optional | (Optional) Minimum severity threshold set to control flagging of the vulnerabilities found during the scan. The available levels are: (UNKNOWN, LOW, MEDIUM, HIGH, CRITICAL); if you set the severity-threshold to be `MEDIUM` every CVE found of a level higher than or equal to `MEDIUM` would be displayed Default: HIGH |
username |
Optional | Username to authenticate to the Docker registry |
password |
Optional | Password to authenticate to the Docker registry |
token |
Required | GitHub token Default: ${{ github.token }} |
trivy-version |
Optional | Version of Trivy to run Default: latest |
run-quality-checks |
Optional | Add additional checks to ensure the image is secure and follows best practices and CIS standards Default: true |
| Name | Description |
|---|---|
scan-report-path |
File path where the scan results are stored |
check-run-url |
URL of the check run, which gives an overview of the vulnerability and best-practice findings. |
name: 'Container image scan'
description: 'Scan container images for vulnerabilities and CIS guidelines'
inputs:
image-name:
description: 'Docker image to scan'
required: true
severity-threshold:
description: '(Optional) Minimum severity threshold set to control flagging of the vulnerabilities found during the scan. The available levels are: (UNKNOWN, LOW, MEDIUM, HIGH, CRITICAL); if you set the severity-threshold to be `MEDIUM` every CVE found of a level higher than or equal to `MEDIUM` would be displayed'
required: false
default: 'HIGH'
username:
description: 'Username to authenticate to the Docker registry'
required: false
password:
description: 'Password to authenticate to the Docker registry'
required: false
token:
    description: 'GitHub token'
default: ${{ github.token }}
required: true
trivy-version:
description: 'Version of Trivy to run'
default: 'latest'
required: false
run-quality-checks:
description: 'Add additional checks to ensure the image is secure and follows best practices and CIS standards'
default: 'true'
required: false
outputs:
scan-report-path:
description: 'File path where the scan results are stored'
check-run-url:
    description: 'URL of the check run, which gives an overview of the vulnerability and best-practice findings.'
runs:
using: 'node12'
main: 'lib/main.js'
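A sketch that builds an image locally and then scans it; the image name, ref, and threshold are assumptions.
```yaml
# Hypothetical build-and-scan job (ref and image name are placeholders).
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t example/app:${{ github.sha }} .
      - uses: azure/container-scan@v0           # illustrative ref
        with:
          image-name: example/app:${{ github.sha }}
          severity-threshold: MEDIUM
          run-quality-checks: true
```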
Action ID: marketplace/julia-actions/add-julia-registry
Author: Chris de Graaf
Publisher: julia-actions
Repository: github.com/julia-actions/add-julia-registry
Clone a private Julia registry and configure Pkg access
| Name | Required | Description |
|---|---|---|
registry |
Required | Registry to clone (owner/repo) |
protocol |
Optional | Protocol to use when cloning GitHub repositories. Either `ssh` or `https`. Default: ssh |
ssh-key |
Optional | SSH private key contents when cloning with SSH. |
key |
Optional | Deprecated input which was replaced by `ssh-key`. |
github-token |
Optional | The GitHub token to use when cloning with HTTPS. Default: ${{ github.token }} |
name: Add Julia Registry
author: Chris de Graaf
description: Clone a private Julia registry and configure Pkg access
inputs:
registry:
description: Registry to clone (owner/repo)
required: true
protocol:
description: Protocol to use when cloning GitHub repositories. Either `ssh` or `https`.
default: ssh
ssh-key:
description: SSH private key contents when cloning with SSH.
required: false
key:
description: Deprecated input which was replaced by `ssh-key`.
required: false
github-token:
description: The GitHub token to use when cloning with HTTPS.
default: ${{ github.token }}
runs:
using: node20
main: main.js
post: post.js
branding:
icon: package
color: red
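A minimal sketch; the registry slug and secret name are placeholders, and it assumes a Julia toolchain is set up elsewhere in the job.
```yaml
# Hypothetical private-registry setup (registry and secret names are placeholders).
name: test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # A Julia toolchain step (e.g. julia-actions/setup-julia) would typically run here.
      - uses: julia-actions/add-julia-registry@v2    # illustrative ref
        with:
          registry: my-org/PrivateRegistry           # hypothetical owner/repo
          ssh-key: ${{ secrets.REGISTRY_SSH_KEY }}   # hypothetical secret
```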
Action ID: marketplace/actions/setup-ruby
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-ruby
Set up a specific version of Ruby and add the command-line tools to the PATH
| Name | Required | Description |
|---|---|---|
ruby-version |
Optional | Version range or exact version of a Ruby version to use. Default: >= 2.4 |
version |
Optional | Deprecated. Use ruby-version instead. Will not be supported after October 1, 2019 |
name: 'Setup Ruby'
description: 'Set up a specific version of Ruby and add the command-line tools to the PATH'
author: 'GitHub'
inputs:
ruby-version:
description: 'Version range or exact version of a Ruby version to use.'
default: '>= 2.4'
# Deprecated option, do not use. Will not be supported after October 1, 2019
version:
description: 'Deprecated. Use ruby-version instead. Will not be supported after October 1, 2019'
deprecationMessage: 'The version property will not be supported after October 1, 2019. Use ruby-version instead'
runs:
using: 'node12'
main: 'dist/index.js'
Action ID: marketplace/VeryGoodOpenSource/very_good_coverage
Author: Unknown
Publisher: VeryGoodOpenSource
Repository: github.com/VeryGoodOpenSource/very_good_coverage
Enforce LCOV Coverage Thresholds
| Name | Required | Description |
|---|---|---|
path |
Optional | lcov path Default: ./coverage/lcov.info |
min_coverage |
Optional | minimum coverage percentage allowed Default: 100 |
exclude |
Optional | list of files you would like to exclude from coverage |
name: 'Very Good Coverage'
description: 'Enforce LCOV Coverage Thresholds'
branding:
icon: check-square
color: green
inputs:
path:
description: 'lcov path'
required: false
default: './coverage/lcov.info'
min_coverage:
description: 'minimum coverage percentage allowed'
required: false
default: 100
exclude:
description: 'list of files you would like to exclude from coverage'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
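A sketch of a coverage gate, assuming a prior step has already written the LCOV report; the ref, threshold, and exclude glob are illustrative.
```yaml
# Hypothetical coverage gate (ref, threshold and glob are placeholders).
name: coverage
on: [push]
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...steps that run the tests and write ./coverage/lcov.info...
      - uses: VeryGoodOpenSource/very_good_coverage@v3   # illustrative ref
        with:
          path: ./coverage/lcov.info
          min_coverage: 95
          exclude: '**/*.g.dart'                         # hypothetical exclusion
```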
Action ID: marketplace/amirisback/consumable-code-covid-19-api
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/consumable-code-covid-19-api
consumable-code-covid-19-api for your android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'consumable-code-covid-19-api'
description: 'consumable-code-covid-19-api for your android apps'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/CodelyTV/no-pull-requests
Author: Unknown
Publisher: CodelyTV
Repository: github.com/CodelyTV/no-pull-requests
Automatically closes any open pull request
| Name | Required | Description |
|---|---|---|
GITHUB_TOKEN |
Required | GitHub token |
message |
Optional | Message to show when the pull request is closed Default: 🙅 We do not accept Pull Requests! |
name: 'No Pull Requests'
description: 'Automatically closes any open pull request'
branding:
icon: 'alert-octagon'
color: 'red'
inputs:
GITHUB_TOKEN:
description: 'GitHub token'
required: true
message:
description: 'Message to show when the pull request is closed'
required: false
default: '🙅 We do not accept Pull Requests!'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.GITHUB_TOKEN }}
- ${{ inputs.message }}
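A sketch that closes newly opened pull requests with a custom message; the ref, trigger, and message are assumptions.
```yaml
# Hypothetical auto-close workflow for incoming PRs (ref is illustrative).
name: close-prs
on:
  pull_request:
    types: [opened]
permissions:
  pull-requests: write
jobs:
  close:
    runs-on: ubuntu-latest
    steps:
      - uses: CodelyTV/no-pull-requests@v1   # illustrative ref
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          message: 'Thanks, but this mirror does not accept pull requests.'
```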
Action ID: marketplace/atlassian/gajira-find-issue-key
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-find-issue-key
Find an issue inside event
| Name | Required | Description |
|---|---|---|
string |
Optional | Provide a string to extract issue key from |
from |
Optional | Find from predefined place (should be either 'branch', or 'commits') Default: commits |
| Name | Description |
|---|---|
issue |
Key of the found issue |
name: Jira Find issue key
description: Find an issue inside event
branding:
icon: 'book-open'
color: 'blue'
inputs:
string:
description: Provide a string to extract issue key from
required: false
from:
description: Find from predefined place (should be either 'branch', or 'commits')
required: false
default: commits
outputs:
issue:
description: Key of the found issue
runs:
using: 'node16'
main: './dist/index.js'
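A sketch that extracts a Jira issue key from the branch name; it assumes the companion `atlassian/gajira-login` action (with the usual Jira secrets) handles authentication first, and both refs are illustrative.
```yaml
# Hypothetical find-issue-key job (refs and secret names are assumptions).
name: jira-find-issue
on: [push]
jobs:
  find-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: atlassian/gajira-login@v3            # assumed companion login action
        env:
          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
      - id: find
        uses: atlassian/gajira-find-issue-key@v3   # illustrative ref
        with:
          from: branch
      - run: echo "Found issue ${{ steps.find.outputs.issue }}"
```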
Action ID: marketplace/irgolic/pull-request-comment-trigger
Author: Unknown
Publisher: irgolic
Repository: github.com/irgolic/pull-request-comment-trigger
Look for a "trigger word" in a pull-request description or comment, so that later steps can know whether or not to run.
| Name | Required | Description |
|---|---|---|
reaction |
Optional | If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket". |
trigger |
Required | The string to look for in pull-request descriptions and comments. For example "#build/android" |
prefix_only |
Optional | If 'true', the trigger must match the start of the comment. Default: false |
| Name | Description |
|---|---|
triggered |
the string 'true' if the trigger was found, otherwise the string 'false' |
comment_body |
The comment body. |
name: 'Pull Request Comment Trigger fork'
description: 'Look for a "trigger word" in a pull-request description or comment, so that later steps can know whether or not to run.'
inputs:
reaction:
description: If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket".
required: false
default: ""
trigger:
description: 'The string to look for in pull-request descriptions and comments. For example "#build/android"'
required: true
prefix_only:
description: If 'true', the trigger must match the start of the comment.
required: false
default: "false"
outputs:
triggered:
description: the string 'true' if the trigger was found, otherwise the string 'false'
comment_body:
description: The comment body.
runs:
using: 'node16'
main: 'dist/index.js'
branding:
icon: check-circle
color: red
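A sketch that gates a later step on the trigger word being present in a new comment; the ref is illustrative and the `GITHUB_TOKEN` environment variable is assumed to be needed for adding the reaction.
```yaml
# Hypothetical comment-triggered build (ref is illustrative).
name: comment-trigger
on:
  issue_comment:
    types: [created]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - id: check
        uses: irgolic/pull-request-comment-trigger@v1   # illustrative ref
        with:
          trigger: '#build/android'
          reaction: rocket
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}     # assumed to be required for the reaction
      - if: steps.check.outputs.triggered == 'true'
        run: echo "Trigger word found, starting the Android build..."
```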
Action ID: marketplace/peter-evans/ssh-agent
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/ssh-agent
Run `ssh-agent` and load an SSH key to access other private repositories
| Name | Required | Description |
|---|---|---|
ssh-private-key |
Required | Private SSH key to register in the SSH agent |
ssh-auth-sock |
Optional | Where to place the SSH Agent auth socket |
name: 'webfactory/ssh-agent'
description: 'Run `ssh-agent` and load an SSH key to access other private repositories'
inputs:
ssh-private-key:
description: 'Private SSH key to register in the SSH agent'
required: true
ssh-auth-sock:
description: 'Where to place the SSH Agent auth socket'
runs:
using: 'node12'
main: 'dist/index.js'
post: 'dist/cleanup.js'
post-if: 'always()'
branding:
icon: loader
color: 'yellow'
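A minimal sketch that loads a deploy key so later steps can reach private repositories; the ref, secret name, and repository URL are placeholders.
```yaml
# Hypothetical private clone using a loaded deploy key (ref and names are placeholders).
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: peter-evans/ssh-agent@v0                         # illustrative ref
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}      # hypothetical secret
      - run: git clone git@github.com:my-org/private-repo.git  # placeholder repository
```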
Action ID: marketplace/peter-evans/close-issue
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/close-issue
A GitHub action to close an issue
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub auth token Default: ${{ github.token }} |
repository |
Optional | The GitHub repository containing the issue Default: ${{ github.repository }} |
issue-number |
Optional | The number of the issue to close Default: ${{ github.event.issue.number }} |
close-reason |
Optional | Reason for closing the issue, "completed" or "not_planned" Default: completed |
comment |
Optional | A comment to make on the issue before closing |
labels |
Optional | A comma or newline separated list of labels. |
name: 'Close Issue'
description: 'A GitHub action to close an issue'
inputs:
token:
required: false
description: 'GitHub auth token'
default: ${{ github.token }}
repository:
required: false
description: 'The GitHub repository containing the issue'
default: ${{ github.repository }}
issue-number:
required: false
description: 'The number of the issue to close'
default: ${{ github.event.issue.number }}
close-reason:
required: false
description: 'Reason for closing the issue, "completed" or "not_planned"'
default: 'completed'
comment:
required: false
description: 'A comment to make on the issue before closing'
labels:
required: false
description: 'A comma or newline separated list of labels.'
runs:
using: composite
steps:
- name: Set parameters
id: params
shell: bash
run: |
if [ -n "${{ inputs.comment }}" ]; then
comment="--comment \"${{ inputs.comment }}\""
delimiter="$(openssl rand -hex 8)"
echo "comment<<$delimiter" >> $GITHUB_OUTPUT
echo "$comment" >> $GITHUB_OUTPUT
echo "$delimiter" >> $GITHUB_OUTPUT
fi
if [ "${{ inputs.close-reason }}" == "not_planned" ]; then
echo close-reason="not planned" >> $GITHUB_OUTPUT
else
echo close-reason="completed" >> $GITHUB_OUTPUT
fi
# Convert labels to comma separated list
if [ -n "${{ inputs.labels }}" ]; then
labels=$(echo "${{ inputs.labels }}" | tr '\n' ',' | sed 's/,$//')
# Remove trailing comma and whitespace
labels=$(echo $labels | sed 's/,$//' | sed 's/[[:space:]]//g')
echo labels=$labels >> $GITHUB_OUTPUT
fi
- name: Close Issue
shell: bash
run: |
gh issue close -R "${{ inputs.repository }}" \
--reason "${{ steps.params.outputs.close-reason }}" \
${{ steps.params.outputs.comment }} \
"${{ inputs.issue-number }}"
env:
GH_TOKEN: ${{ inputs.token }}
- name: Add Labels
if: steps.params.outputs.labels != ''
shell: bash
run: |
gh issue edit -R "${{ inputs.repository }}" \
--add-label "${{ steps.params.outputs.labels }}" \
"${{ inputs.issue-number }}"
env:
GH_TOKEN: ${{ inputs.token }}
branding:
icon: 'git-pull-request'
color: 'gray-dark'
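A sketch that closes the triggering issue with a comment and label once a specific label is applied; the gating label, comment, and ref are assumptions.
```yaml
# Hypothetical auto-close on label (trigger and values are placeholders).
name: close-issue
on:
  issues:
    types: [labeled]
permissions:
  issues: write
jobs:
  close:
    if: github.event.label.name == 'invalid'   # hypothetical gating label
    runs-on: ubuntu-latest
    steps:
      - uses: peter-evans/close-issue@v3       # illustrative ref
        with:
          close-reason: not_planned
          comment: Closing as invalid.
          labels: wontfix
```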
Action ID: marketplace/atlassian/gajira-transition
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-transition
Change status of specific Jira issue
| Name | Required | Description |
|---|---|---|
issue |
Required | Key of the issue to be transitioned |
transition |
Optional | The transition to apply to the issue, e.g. 'In Progress' |
transitionId |
Optional | The id of a transition to apply to the issue |
name: Jira Issue Transition
description: Change status of specific Jira issue
branding:
icon: 'chevron-right'
color: 'blue'
inputs:
issue:
description: Key of the issue to be transitioned
required: true
transition:
description: The transition to apply to the issue, e.g. 'In Progress'
required: false
transitionId:
description: The id of a transition to apply to the issue
required: false
runs:
using: 'node16'
main: './dist/index.js'
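A sketch that moves an issue to a new status; authentication is assumed to be handled by the companion `atlassian/gajira-login` action, and the refs, issue key, and transition name are placeholders.
```yaml
# Hypothetical transition step (refs, issue key and transition are placeholders).
name: jira-transition
on: [push]
jobs:
  transition:
    runs-on: ubuntu-latest
    steps:
      - uses: atlassian/gajira-login@v3        # assumed companion login action
        env:
          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
      - uses: atlassian/gajira-transition@v3   # illustrative ref
        with:
          issue: PROJ-123                      # placeholder issue key
          transition: 'In Progress'
```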
Action ID: marketplace/github/ai-moderator
Author: GitHub
Publisher: github
Repository: github.com/github/ai-moderator
Scan new issues and comments with GitHub Models AI prompts; label or hide spam automatically.
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub token with `issues`, `pull-requests`, and `models: read` permissions Default: ${{ github.token }} |
spam-label |
Optional | Label to add when generic spam is detected Default: spam |
ai-label |
Optional | Label to add when AI-generated content is detected Default: ai-generated |
minimize-detected-comments |
Optional | Whether to minimize comments detected as spam Default: true |
dry-run |
Optional | If true, only evaluate without adding labels or minimizing comments Default: false |
custom-prompt-path |
Optional | Path to a custom YAML prompt file in your repository (relative to repository root) |
enable-spam-detection |
Optional | Enable built-in spam detection prompt Default: true |
enable-link-spam-detection |
Optional | Enable built-in link spam detection prompt Default: true |
enable-ai-detection |
Optional | Enable built-in AI-generated content detection prompt Default: true |
endpoint |
Optional | The endpoint to use for inference Default: https://models.github.ai/inference |
name: 'AI Issue and Comment Moderator'
description:
'Scan new issues and comments with GitHub Models AI prompts; label or hide
spam automatically.'
author: 'GitHub'
branding:
icon: 'shield'
color: 'purple'
inputs:
token:
description:
'GitHub token with `issues`, `pull-requests`, and `models: read`
permissions'
required: false
default: ${{ github.token }}
spam-label:
description: 'Label to add when generic spam is detected'
required: false
default: 'spam'
ai-label:
description: 'Label to add when AI-generated content is detected'
required: false
default: 'ai-generated'
minimize-detected-comments:
description: 'Whether to minimize comments detected as spam'
required: false
default: 'true'
dry-run:
description:
'If true, only evaluate without adding labels or minimizing comments'
required: false
default: 'false'
custom-prompt-path:
description:
'Path to a custom YAML prompt file in your repository (relative to
repository root)'
required: false
enable-spam-detection:
description: 'Enable built-in spam detection prompt'
required: false
default: 'true'
enable-link-spam-detection:
description: 'Enable built-in link spam detection prompt'
required: false
default: 'true'
enable-ai-detection:
description: 'Enable built-in AI-generated content detection prompt'
required: false
default: 'true'
endpoint:
description: The endpoint to use for inference
required: false
default: 'https://models.github.ai/inference'
runs:
using: 'node16'
main: 'dist/index.js'
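A moderation workflow sketch; the trigger events and permissions follow the action's description, while the ref is an assumption.
```yaml
# Hypothetical moderation workflow for new issues and comments (ref is illustrative).
name: moderate
on:
  issues:
    types: [opened]
  issue_comment:
    types: [created]
permissions:
  issues: write
  pull-requests: write
  models: read
jobs:
  moderate:
    runs-on: ubuntu-latest
    steps:
      - uses: github/ai-moderator@v1   # illustrative ref
        with:
          spam-label: spam
          ai-label: ai-generated
          minimize-detected-comments: true
          dry-run: false
```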
Action ID: marketplace/repo-sync/pull-request
Author: Wei He <github@weispot.com>
Publisher: repo-sync
Repository: github.com/repo-sync/pull-request
⤵️ Create pull request
| Name | Required | Description |
|---|---|---|
destination_repository |
Optional | Repository (user/repo) to create the pull request in; falls back to the checked-out repository or the repository that triggered the workflow |
source_branch |
Optional | Branch name to pull from, default is triggered branch |
destination_branch |
Optional | Branch name to sync to in this repo, default is master Default: master |
pr_title |
Optional | Pull request title |
pr_body |
Optional | Pull request body |
pr_template |
Optional | Pull request template |
pr_reviewer |
Optional | Pull request reviewers, comma-separated list (no spaces) |
pr_assignee |
Optional | Pull request assignees, comma-separated list (no spaces) |
pr_label |
Optional | Pull request labels, comma-separated list (no spaces) |
pr_milestone |
Optional | Pull request milestone |
pr_draft |
Optional | Draft pull request |
pr_allow_empty |
Optional | Create PR even if no changes |
working_directory |
Optional | Change working directory |
github_token |
Required | GitHub token secret Default: ${{ github.token }} |
debug |
Optional | Bash set -x debugging mode |
| Name | Description |
|---|---|
pr_url |
Pull request URL |
pr_number |
Pull request number |
pr_created |
Boolean string indicating if a pull request was created from the action run |
has_changed_files |
Boolean string indicating whether any file has been changed |
name: GitHub Pull Request Action
author: Wei He <github@weispot.com>
description: ⤵️ Create pull request
branding:
icon: 'git-pull-request'
color: 'gray-dark'
inputs:
destination_repository:
    description: Repository (user/repo) to create the pull request in; falls back to the checked-out repository or the repository that triggered the workflow
required: false
source_branch:
description: Branch name to pull from, default is triggered branch
required: false
destination_branch:
description: Branch name to sync to in this repo, default is master
required: false
default: master
pr_title:
description: Pull request title
required: false
pr_body:
description: Pull request body
required: false
pr_template:
description: Pull request template
required: false
pr_reviewer:
description: Pull request reviewers, comma-separated list (no spaces)
required: false
pr_assignee:
description: Pull request assignees, comma-separated list (no spaces)
required: false
pr_label:
description: Pull request labels, comma-separated list (no spaces)
required: false
pr_milestone:
description: Pull request milestone
required: false
pr_draft:
description: Draft pull request
required: false
pr_allow_empty:
description: Create PR even if no changes
required: false
working_directory:
description: Change working directory
required: false
github_token:
description: GitHub token secret
required: true
default: ${{ github.token }}
debug:
description: Bash set -x debugging mode
required: false
outputs:
pr_url:
description: 'Pull request URL'
pr_number:
description: 'Pull request number'
pr_created:
description: 'Boolean string indicating if a pull request was created from the action run'
has_changed_files:
description: 'Boolean string indicating whether any file has been changed'
runs:
using: 'docker'
image: Dockerfile
env:
GITHUB_TOKEN: ${{ inputs.github_token }}
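A sketch that opens a pull request from a pushed feature branch into `main`; the ref, branch filter, title, and label are illustrative.
```yaml
# Hypothetical PR-creation workflow (ref, branches and labels are placeholders).
name: open-pr
on:
  push:
    branches: ['feature/**']
jobs:
  pull-request:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: repo-sync/pull-request@v2    # illustrative ref
        with:
          destination_branch: main
          pr_title: 'Sync ${{ github.ref_name }} into main'
          pr_label: automated
          github_token: ${{ secrets.GITHUB_TOKEN }}
```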
Action ID: marketplace/actions/attest-sbom
Author: GitHub
Publisher: actions
Repository: github.com/actions/attest-sbom
Generate SBOM attestations for build artifacts
| Name | Required | Description |
|---|---|---|
subject-path |
Optional | Path to the artifact serving as the subject of the attestation. Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". May contain a glob pattern or list of paths (total subject count cannot exceed 1024). |
subject-digest |
Optional | SHA256 digest of the subject for the attestation. Must be in the form "sha256:hex_digest" (e.g. "sha256:abc123..."). Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". |
subject-name |
Optional | Subject name as it should appear in the attestation. Required when identifying the subject with the "subject-digest" input. |
subject-checksums |
Optional | Path to checksums file containing digest and name of subjects for attestation. Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". |
sbom-path |
Required | Path to the JSON-formatted SBOM file to attest. File size cannot exceed 16MB. |
push-to-registry |
Optional | Whether to push the provenance statement to the image registry. Requires that the "subject-name" parameter specify the fully-qualified image name and that the "subject-digest" parameter be specified. Defaults to false. |
show-summary |
Optional | Whether to attach a list of generated attestations to the workflow run summary page. Defaults to true.
Default: True |
github-token |
Optional | The GitHub token used to make authenticated API requests.
Default: ${{ github.token }} |
| Name | Description |
|---|---|
bundle-path |
The path to the file containing the attestation bundle. |
attestation-id |
The ID of the attestation. |
attestation-url |
The URL for the attestation summary. |
name: 'Attest SBOM'
description: 'Generate SBOM attestations for build artifacts'
author: 'GitHub'
branding:
color: 'blue'
icon: 'paperclip'
inputs:
subject-path:
description: >
Path to the artifact serving as the subject of the attestation. Must
specify exactly one of "subject-path", "subject-digest", or
"subject-checksums". May contain a glob pattern or list of paths (total
subject count cannot exceed 1024).
required: false
subject-digest:
description: >
SHA256 digest of the subject for the attestation. Must be in the form
"sha256:hex_digest" (e.g. "sha256:abc123..."). Must specify exactly one of
"subject-path", "subject-digest", or "subject-checksums".
required: false
subject-name:
description: >
Subject name as it should appear in the attestation. Required when
identifying the subject with the "subject-digest" input.
subject-checksums:
description: >
Path to checksums file containing digest and name of subjects for
attestation. Must specify exactly one of "subject-path", "subject-digest",
or "subject-checksums".
required: false
sbom-path:
description: >
Path to the JSON-formatted SBOM file to attest. File size cannot exceed
16MB.
required: true
push-to-registry:
description: >
Whether to push the provenance statement to the image registry. Requires
that the "subject-name" parameter specify the fully-qualified image name
and that the "subject-digest" parameter be specified. Defaults to false.
default: false
required: false
show-summary:
description: >
Whether to attach a list of generated attestations to the workflow run
summary page. Defaults to true.
default: true
required: false
github-token:
description: >
The GitHub token used to make authenticated API requests.
default: ${{ github.token }}
required: false
outputs:
bundle-path:
description: 'The path to the file containing the attestation bundle.'
value: ${{ steps.attest.outputs.bundle-path }}
attestation-id:
description: 'The ID of the attestation.'
value: ${{ steps.attest.outputs.attestation-id }}
attestation-url:
description: 'The URL for the attestation summary.'
value: ${{ steps.attest.outputs.attestation-url }}
runs:
using: 'composite'
steps:
- uses: actions/attest-sbom/predicate@55e972012fb8695c1b0049174547f1fcb4baa8a5 # predicate@2.0.0
id: generate-sbom-predicate
with:
sbom-path: ${{ inputs.sbom-path }}
- uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
id: attest
env:
NODE_OPTIONS: '--max-http-header-size=32768'
with:
subject-path: ${{ inputs.subject-path }}
subject-digest: ${{ inputs.subject-digest }}
subject-name: ${{ inputs.subject-name }}
subject-checksums: ${{ inputs.subject-checksums }}
predicate-type:
${{ steps.generate-sbom-predicate.outputs.predicate-type }}
predicate-path:
${{ steps.generate-sbom-predicate.outputs.predicate-path }}
push-to-registry: ${{ inputs.push-to-registry }}
show-summary: ${{ inputs.show-summary }}
github-token: ${{ inputs.github-token }}
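An end-to-end sketch: build an artifact, produce an SBOM, and attest it. The build command, the SBOM-generation step, and the ref are placeholders; the permissions reflect the usual requirements for signing and storing attestations.
```yaml
# Hypothetical build, SBOM generation and attestation (build and SBOM steps are placeholders).
name: attest
on:
  push:
    tags: ['v*']
permissions:
  id-token: write
  contents: read
  attestations: write
jobs:
  attest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh                    # placeholder build producing dist/app
      # ...a step that writes a JSON-formatted SBOM to sbom.spdx.json would run here...
      - uses: actions/attest-sbom@v3       # illustrative ref
        with:
          subject-path: dist/app
          sbom-path: sbom.spdx.json
```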
Action ID: marketplace/rhysd/github-action-benchmark
Author: github-action-benchmark developers <https://github.com/benchmark-action>
Publisher: rhysd
Repository: github.com/rhysd/github-action-benchmark
Continuous Benchmark using GitHub Pages as a dashboard for tracking performance
| Name | Required | Description |
|---|---|---|
| name | Required | Name of the benchmark. This value must be identical among all benchmarks Default: Benchmark |
| tool | Required | Tool used to get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter" |
| output-file-path | Required | A path to file which contains the benchmark output |
| gh-pages-branch | Required | Branch for gh-pages Default: gh-pages |
| gh-repository | Optional | Url to an optional different repository to store benchmark results |
| benchmark-data-dir-path | Required | Path to directory which contains benchmark files on GitHub pages branch Default: dev/bench |
| github-token | Optional | GitHub API token to pull/push GitHub pages branch and deploy GitHub pages. For public repository, this must be personal access token for now. Please read README.md for more details |
| ref | Optional | optional Ref to use when finding commit |
| auto-push | Optional | Push GitHub Pages branch to remote automatically. This option requires github-token input |
| skip-fetch-gh-pages | Optional | Skip pulling GitHub Pages branch before generating an auto commit |
| comment-always | Optional | Leave a comment with benchmark result comparison. To enable this feature, github-token input must be given as well |
| summary-always | Optional | Leave a job summary with benchmark result comparison |
| save-data-file | Optional | Save the benchmark data to external file Default: True |
| comment-on-alert | Optional | Leave an alert comment when current benchmark result is worse than previous. Threshold is specified with alert-threshold input. To enable this feature, github-token input must be given as well |
| alert-threshold | Optional | Threshold which determines if an alert should happen or not. Percentage value such as "150%". For example, 150% means that an alert happens when current benchmark result is 1.5x worse than previous Default: 200% |
| fail-on-alert | Optional | Workflow fails when alert comment happens |
| fail-threshold | Optional | Threshold which determines if the current workflow fails. Format is the same as alert-threshold input. If this value is not specified, the same value as alert-threshold is used |
| alert-comment-cc-users | Optional | Comma separated GitHub user names which start with @ (e.g. "@foo,@bar"). They will be mentioned in commit comment for alert. |
| external-data-json-path | Optional | JSON data file for storing benchmark results. When this input is set, github-action-benchmark no longer uses Git branch to store data. Instead, it reads and appends benchmark data from/to the file. User must store the file anywhere |
| max-items-in-chart | Optional | Max data points in a benchmark chart to avoid making the chart too busy. Value must be unsigned integer. No limit by default |
name: 'Continuous Benchmark'
author: 'github-action-benchmark developers <https://github.com/benchmark-action>'
description: 'Continuous Benchmark using GitHub Pages as a dashboard for tracking performance'
branding:
icon: 'fast-forward'
color: 'blue'
inputs:
name:
description: 'Name of the benchmark. This value must be identical among all benchmarks'
required: true
default: 'Benchmark'
tool:
description: 'Tool used to get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter"'
required: true
output-file-path:
description: 'A path to file which contains the benchmark output'
required: true
gh-pages-branch:
description: 'Branch for gh-pages'
required: true
default: 'gh-pages'
gh-repository:
description: 'Url to an optional different repository to store benchmark results'
required: false
benchmark-data-dir-path:
description: 'Path to directory which contains benchmark files on GitHub pages branch'
required: true
default: 'dev/bench'
github-token:
description: 'GitHub API token to pull/push GitHub pages branch and deploy GitHub pages. For public repository, this must be personal access token for now. Please read README.md for more details'
required: false
ref:
description: 'optional Ref to use when finding commit'
required: false
auto-push:
description: 'Push GitHub Pages branch to remote automatically. This option requires github-token input'
required: false
default: false
skip-fetch-gh-pages:
description: 'Skip pulling GitHub Pages branch before generating an auto commit'
required: false
default: false
comment-always:
description: 'Leave a comment with benchmark result comparison. To enable this feature, github-token input must be given as well'
required: false
default: false
summary-always:
description: 'Leave a job summary with benchmark result comparison'
required: false
default: false
save-data-file:
description: 'Save the benchmark data to external file'
required: false
default: true
comment-on-alert:
description: 'Leave an alert comment when current benchmark result is worse than previous. Threshold is specified with alert-threshold input. To enable this feature, github-token input must be given as well'
required: false
default: false
alert-threshold:
description: 'Threshold which determines if an alert should happen or not. Percentage value such as "150%". For example, 150% means that an alert happens when current benchmark result is 1.5x worse than previous'
required: false
default: '200%'
fail-on-alert:
description: 'Workflow fails when alert comment happens'
required: false
# Note: Set to false by default since this action does not push to remote by default. When workflow
# fails and auto-push is not set, there is no chance to push the result to remote.
default: false
fail-threshold:
description: 'Threshold which determines if the current workflow fails. Format is the same as alert-threshold input. If this value is not specified, the same value as alert-threshold is used'
required: false
alert-comment-cc-users:
description: 'Comma separated GitHub user names which start with @ (e.g. "@foo,@bar"). They will be mentioned in commit comment for alert.'
required: false
external-data-json-path:
description: 'JSON data file for storing benchmark results. When this input is set, github-action-benchmark no longer uses Git branch to store data. Instead, it reads and appends benchmark data from/to the file. User must store the file anywhere'
required: false
max-items-in-chart:
description: 'Max data points in a benchmark chart to avoid making the chart too busy. Value must be unsigned integer. No limit by default'
required: false
runs:
using: 'node20'
main: 'dist/src/index.js'
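A hedged workflow sketch for this entry; the benchmark command, branch, and v1 version pin are illustrative, and `contents: write` is an assumption about what `auto-push` needs:

```yaml
on:
  push:
    branches: [main]
permissions:
  contents: write    # assumed: required so the action can push to the gh-pages branch
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo bench | tee bench-output.txt   # illustrative benchmark command
      - uses: rhysd/github-action-benchmark@v1    # version pin is illustrative
        with:
          name: My Benchmarks
          tool: cargo
          output-file-path: bench-output.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
          comment-on-alert: true
          alert-threshold: '150%'
```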
Action ID: marketplace/mheap/github-action-explore-debug
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-explore-debug
Show available data for any Action run
| Name | Required | Description |
|---|---|---|
| jq_selector | Optional | The JQ selector to execute Default: . |
name: Explore Debug
description: Show available data for any Action run
runs:
using: Docker
image: Dockerfile
inputs:
jq_selector:
description: "The JQ selector to execute"
required: false
default: "."
Action ID: marketplace/azure/LAMPStack-Azure-VirtualMachine
Author: Unknown
Publisher: azure
Repository: github.com/azure/LAMPStack-Azure-VirtualMachine
This action helps create an Azure Virtual Machine and deploy LAMP stack on it
| Name | Required | Description |
|---|---|---|
| client-id | Required | Client id to login to azure |
| tenant-id | Required | Tenant id to login to azure |
| subscription-id | Required | Subscription id to be used with your az login |
| resource-group-name | Required | Resource group to deploy your resources to |
| admin-username | Required | Admin username to login to vm |
| admin-password | Required | Admin password to login to vm |
name: 'Deploy LAMP stack on Azure Virtual Machine'
description: 'This action helps create an Azure Virtual Machine and deploy LAMP stack on it'
branding:
icon: 'play-circle'
color: 'blue'
inputs:
client-id:
description: 'Client id to login to azure'
required: true
tenant-id:
description: 'Tenant id to login to azure'
required: true
subscription-id:
description: 'Subscription id to be used with your az login'
required: true
resource-group-name:
description: 'Resource group to deploy your resources to'
required: true
admin-username:
description: 'Admin username to login to vm'
required: true
admin-password:
description: 'Admin password to login to vm'
required: true
runs:
using: 'composite'
steps:
- name: 'Checkout master'
uses: actions/checkout@v3
- name: az cli login
uses: azure/login@v1
with:
client-id: ${{ inputs.client-id }}
tenant-id: ${{ inputs.tenant-id }}
subscription-id: ${{ inputs.subscription-id }}
enable-AzPSSession: false
- name: 'Accept Bitnami marketplace terms'
shell: bash
run: |
az vm image terms accept --urn bitnami:lampstack:5-6:latest
- name: 'Az deploy - LAMP on VM Bitnami'
uses: azure/arm-deploy@v1
with:
subscriptionId: ${{ inputs.subscription-id }}
resourceGroupName: ${{ inputs.resource-group-name }}
template: https://raw.githubusercontent.com/vaibbavisk20/deploy_lamp_vm_azure/main/vm_template.json
parameters: https://raw.githubusercontent.com/vaibbavisk20/deploy_lamp_vm_azure/main/vm_parameters.json
adminUsername=${{ inputs.admin-username }}
adminPassword=${{ inputs.admin-password }}
failOnStdErr: false
- name: Fetch JSON file from remote repository
shell: bash
run: |
json_url="https://raw.githubusercontent.com/vaibbavisk20/deploy_lamp_vm_azure/main/vm_parameters.json"
curl -sSLO $json_url
- name: Print parameter value
shell: bash
run: |
json_file="vm_parameters.json"
# vm name
virtual_machine_name=".parameters.virtualMachineName.value"
result=$(jq -r "$virtual_machine_name" "$json_file")
echo "Virtual Machine Properties"
echo "Name: $result"
# region
location=".parameters.location.value"
result=$(jq -r "$location" "$json_file")
echo "Location: $result"
# vm size
virtual_machine_size=".parameters.virtualMachineSize.value"
result=$(jq -r "$virtual_machine_size" "$json_file")
echo "VM Size: $result"
Action ID: marketplace/dflook/terraform-destroy
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-destroy
Destroys all resources in a Terraform workspace
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the Terraform root module directory. Default: . |
| workspace | Optional | The name of the Terraform workspace to destroy. Default: default |
| variables | Optional | Variables to set for the terraform destroy. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| backend_config | Optional | List of Terraform backend config values, one per line. |
| backend_config_file | Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| parallelism | Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
| failure-reason | When the job outcome is `failure`, this output may be set. The value may be one of: - `destroy-failed` - The Terraform destroy operation failed. - `state-locked` - The Terraform state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
| lock-info | When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: ```json { "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" } ``` |
name: terraform-destroy
description: Destroys all resources in a Terraform workspace
author: Daniel Flook
inputs:
path:
description: The path to the Terraform root module directory.
required: false
default: "."
workspace:
description: The name of the Terraform workspace to destroy.
required: false
default: "default"
variables:
description: |
Variables to set for the terraform destroy. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `destroy-failed` - The Terraform destroy operation failed.
- `state-locked` - The Terraform state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/destroy.sh
branding:
icon: globe
color: purple
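A sketch showing the documented inputs and the `failure-reason` output wired into a conditional step; the module path, workspace name, and v1 version pin are illustrative:

```yaml
jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/terraform-destroy@v1   # version pin is illustrative
        id: destroy
        with:
          path: infra/environments/dev      # illustrative root module path
          workspace: dev
      - if: failure() && steps.destroy.outputs.failure-reason == 'state-locked'
        run: echo 'State lock info:' '${{ steps.destroy.outputs.lock-info }}'
```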
Action ID: marketplace/amirisback/android-delegate
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-delegate
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/mheap/vale-action
Author: jdkato
Publisher: mheap
Repository: github.com/mheap/vale-action
The official GitHub Action for Vale -- install, manage, and run Vale with ease.
| Name | Required | Description |
|---|---|---|
| version | Optional | The Vale CLI version to install. Default: latest |
| files | Optional | The files to lint: "all" or "<some_folder>". Default: all |
| debug | Optional | Log debugging information to stdout. Default: false |
| reporter | Optional | Reporter of reviewdog command [github-pr-check,github-pr-review,github-check]. Default: github-pr-check |
| fail_on_error | Optional | Exit code for reviewdog when errors are found [true,false]. Default is `false`. Default: false |
| level | Optional | Report level for reviewdog [info,warning,error]. Default: error |
| filter_mode | Optional | Filtering for the reviewdog command [added,diff_context,file,nofilter]. Default is added. Default: added |
| vale_flags | Optional | Space-delimited list of flags for the Vale CLI. |
| separator | Optional | Split character for input strings. |
| reviewdog_url | Optional | The URL to a tar.gz build of reviewdog to use in the action |
| token | Optional | The GitHub token to use. Default: ${{ github.token }} |
name: Vale Linter
description: The official GitHub Action for Vale -- install, manage, and run Vale with ease.
author: jdkato
branding:
icon: check
color: green
inputs:
version:
description: "The Vale CLI version to install."
required: false
default: "latest"
files:
description: 'The files to lint: "all" or "<some_folder>".'
required: false
default: all
debug:
description: "Log debugging information to stdout."
required: false
default: "false"
reporter:
description: "Reporter of reviewdog command [github-pr-check,github-pr-review,github-check]."
required: false
default: "github-pr-check"
fail_on_error:
description: |
Exit code for reviewdog when errors are found [true,false]
Default is `false`.
required: false
default: "false"
level:
description: "Report level for reviewdog [info,warning,error]."
required: false
default: "error"
filter_mode:
description: |
Filtering for the reviewdog command [added,diff_context,file,nofilter].
Default is added.
required: false
default: "added"
vale_flags:
description: "Space-delimited list of flags for the Vale CLI."
required: false
default: ""
separator:
description: "Split character for input strings."
required: false
default: ""
reviewdog_url:
description: "The URL to a tar.gz build of reviewdog to use in the action"
required: false
default: ""
token:
description: "The GitHub token to use."
required: false
default: ${{ github.token }}
runs:
using: "node20"
main: "lib/main.js"
Action ID: marketplace/amirisback/impl-automated-build
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/impl-automated-build
Automated build android app bundle and apk with github action
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'automated-build-android-app-with-github-action'
description: 'Automated build android app bundle and apk with github action'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/jamesgeorge007/github-activity-readme
Author: jamesgeorge007
Publisher: jamesgeorge007
Repository: github.com/jamesgeorge007/github-activity-readme
Updates README with the recent GitHub activity of a user
| Name | Required | Description |
|---|---|---|
| GH_USERNAME | Optional | Your GitHub username Default: ${{ github.repository_owner }} |
| COMMIT_NAME | Optional | Name of the committer Default: github-actions[bot] |
| COMMIT_EMAIL | Optional | Email of the committer Default: 41898282+github-actions[bot]@users.noreply.github.com |
| COMMIT_MSG | Optional | Commit message used while committing to the repo Default: :zap: Update README with the recent activity |
| MAX_LINES | Optional | The maximum number of lines populated in your readme file Default: 5 |
| TARGET_FILE | Optional | The file location to write changes to Default: README.md |
| EMPTY_COMMIT_MSG | Optional | Commit message used when there are no updates Default: :memo: empty commit to keep workflow active after 60 days of no activity |
name: GitHub - Activity - Readme
description: Updates README with the recent GitHub activity of a user
author: jamesgeorge007
inputs:
GH_USERNAME:
description: "Your GitHub username"
default: ${{ github.repository_owner }}
required: false
COMMIT_NAME:
description: "Name of the committer"
default: "github-actions[bot]"
required: false
COMMIT_EMAIL:
description: "Email of the committer"
default: "41898282+github-actions[bot]@users.noreply.github.com"
required: false
COMMIT_MSG:
description: "Commit message used while committing to the repo"
default: ":zap: Update README with the recent activity"
required: false
MAX_LINES:
description: "The maximum number of lines populated in your readme file"
default: 5
required: false
TARGET_FILE:
description: "The file location to write changes to"
default: "README.md"
required: false
EMPTY_COMMIT_MSG:
description: "Commit message used when there are no updates"
default: ":memo: empty commit to keep workflow active after 60 days of no activity"
required: false
branding:
color: yellow
icon: activity
runs:
using: node20
main: dist/index.js
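A scheduled-workflow sketch; the cron, commit message, and `@master` ref are illustrative, and passing `GITHUB_TOKEN` through `env` plus `contents: write` are assumptions about how the action authenticates its commit:

```yaml
on:
  schedule:
    - cron: '0 */6 * * *'   # illustrative schedule
  workflow_dispatch:
permissions:
  contents: write           # assumed: the action commits the updated README back
jobs:
  update-readme:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: jamesgeorge007/github-activity-readme@master   # ref is illustrative
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}          # assumed auth mechanism
        with:
          MAX_LINES: 10
          COMMIT_MSG: ':zap: update activity feed'
```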
Action ID: marketplace/azure/login
Author: Unknown
Publisher: azure
Repository: github.com/azure/login
Authenticate to Azure and run your Azure CLI or Azure PowerShell based actions or scripts.
| Name | Required | Description |
|---|---|---|
| creds | Optional | Paste output of `az ad sp create-for-rbac` as value of secret variable: AZURE_CREDENTIALS |
| client-id | Optional | ClientId of the Azure Service principal created. |
| tenant-id | Optional | TenantId of the Azure Service principal created. |
| subscription-id | Optional | Azure subscriptionId |
| enable-AzPSSession | Optional | Set this value to true to enable Azure PowerShell Login in addition to Azure CLI login |
| environment | Optional | Name of the environment. Supported values are azurecloud, azurestack, azureusgovernment, azurechinacloud, azuregermancloud. Default being azurecloud Default: azurecloud |
| allow-no-subscriptions | Optional | Set this value to true to enable support for accessing tenants without subscriptions |
| audience | Optional | Provide audience field for access-token. Default value is api://AzureADTokenExchange Default: api://AzureADTokenExchange |
| auth-type | Optional | The type of authentication. Supported values are SERVICE_PRINCIPAL, IDENTITY. Default value is SERVICE_PRINCIPAL Default: SERVICE_PRINCIPAL |
# Login to Azure subscription
name: 'Azure Login'
description: 'Authenticate to Azure and run your Azure CLI or Azure PowerShell based actions or scripts.'
inputs:
creds:
description: 'Paste output of `az ad sp create-for-rbac` as value of secret variable: AZURE_CREDENTIALS'
required: false
client-id:
description: 'ClientId of the Azure Service principal created.'
required: false
tenant-id:
description: 'TenantId of the Azure Service principal created.'
required: false
subscription-id:
description: 'Azure subscriptionId'
required: false
enable-AzPSSession:
description: 'Set this value to true to enable Azure PowerShell Login in addition to Azure CLI login'
required: false
default: false
environment:
description: 'Name of the environment. Supported values are azurecloud, azurestack, azureusgovernment, azurechinacloud, azuregermancloud. Default being azurecloud'
required: false
default: azurecloud
allow-no-subscriptions:
description: 'Set this value to true to enable support for accessing tenants without subscriptions'
required: false
default: false
audience:
description: 'Provide audience field for access-token. Default value is api://AzureADTokenExchange'
required: false
default: 'api://AzureADTokenExchange'
auth-type:
description: 'The type of authentication. Supported values are SERVICE_PRINCIPAL, IDENTITY. Default value is SERVICE_PRINCIPAL'
required: false
default: 'SERVICE_PRINCIPAL'
branding:
icon: 'login.svg'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main/index.js'
post-if: (!env.AZURE_LOGIN_POST_CLEANUP || env.AZURE_LOGIN_POST_CLEANUP != 'false')
post: 'lib/cleanup/index.js'
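A sketch of the client-id/tenant-id/subscription-id login path; the secret names and v2 version pin are illustrative, and `id-token: write` is assumed to be required for federated (OIDC) credentials:

```yaml
jobs:
  azure-cli:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # assumed: needed for federated (OIDC) login
      contents: read
    steps:
      - uses: azure/login@v2   # version pin is illustrative
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - run: az account show   # subsequent az commands run as the logged-in identity
```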
Action ID: marketplace/peter-evans/s3-backup
Author: Peter Evans
Publisher: peter-evans
Repository: github.com/peter-evans/s3-backup
Mirror a repository to S3 compatible object storage
name: 'S3 Backup'
author: 'Peter Evans'
description: 'Mirror a repository to S3 compatible object storage'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'upload-cloud'
color: 'yellow'
Action ID: marketplace/appleboy/jira-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/jira-action
Jira Issue Integration
| Name | Required | Description |
|---|---|---|
| base_url | Optional | The base URL of the Jira instance. |
| insecure | Optional | Allow insecure SSL connections. |
| username | Optional | Username for Basic authentication. Recommended only for scripts or bots. |
| password | Optional | Password for Basic authentication. Recommended only for scripts or bots. |
| token | Optional | Personal access token (PAT) for authentication. This method uses the user account associated with the token. |
| ref | Optional | The fully-formed reference of the branch or tag that triggered the workflow run. |
| issue_pattern | Optional | Pattern to match strings referencing an alphanumeric issue, e.g., ABC-1234. Default: ([A-Z]{1,10}-[1-9][0-9]*) |
| transition | Optional | Move the issue to a specific status, e.g., Done, In Progress. |
| resolution | Optional | Set the resolution of the issue, e.g., Done, Fixed. |
| assignee | Optional | Assign the issue to a specific user. |
| comment | Optional | The comment to add to the issue. |
| markdown | Optional | convert the markdown format to Jira format. Default: false |
name: "Jira Issue Integration"
description: "Jira Issue Integration"
author: "Bo-Yi Wu"
inputs:
base_url:
description: "The base URL of the Jira instance."
insecure:
description: "Allow insecure SSL connections."
username:
description: "Username for Basic authentication. Recommended only for scripts or bots."
password:
description: "Password for Basic authentication. Recommended only for scripts or bots."
token:
description: "Personal access token (PAT) for authentication. This method uses the user account associated with the token."
ref:
description: "The fully-formed reference of the branch or tag that triggered the workflow run."
issue_pattern:
description: "Pattern to match strings referencing an alphanumeric issue, e.g., ABC-1234."
default: "([A-Z]{1,10}-[1-9][0-9]*)"
transition:
description: "Move the issue to a specific status, e.g., Done, In Progress."
resolution:
description: "Set the resolution of the issue, e.g., Done, Fixed."
assignee:
description: "Assign the issue to a specific user."
comment:
description: "The comment to add to the issue."
markdown:
description: "convert the markdown format to Jira format."
default: "false"
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "cloud"
color: "blue"
Action ID: marketplace/wpengine/github-action-wpe-site-deploy
Author: Unknown
Publisher: wpengine
Repository: github.com/wpengine/github-action-wpe-site-deploy
Deploy WordPress projects to a WP Engine account using SSH Gateway
| Name | Required | Description |
|---|---|---|
| WPE_SSHG_KEY_PRIVATE | Required | The private RSA key you will save in the Github Secrets |
| PHP_LINT | Optional | optional php syntax check |
| FLAGS | Required | Optional flags for the deployment Default: -azvr --inplace --exclude=".*" |
| CACHE_CLEAR | Optional | Optional WPE Clear cache Default: True |
| SRC_PATH | Optional | An optional source directory to deploy other than the root directory that is being versioned. Default: . |
| REMOTE_PATH | Optional | An optional destination directory to deploy to other than the WordPress root. |
| WPE_ENV | Optional | Destination to deploy to WPE |
| PRD_ENV | Optional | Destination to deploy to WPE Prod |
| STG_ENV | Optional | Destination to deploy to WPE Stage |
| DEV_ENV | Optional | Destination to deploy to WPE Dev |
| SCRIPT | Optional | File containing custom scripts run after the rsync |
---
name: "Deploy WordPress to WP Engine"
branding:
icon: "upload-cloud"
color: "blue"
description: "Deploy WordPress projects to a WP Engine account using SSH Gateway"
inputs:
WPE_SSHG_KEY_PRIVATE:
description: "The private RSA key you will save in the Github Secrets"
required: true
PHP_LINT:
description: "optional php syntax check"
required: false
default: false
FLAGS:
description: "Optional flags for the deployment"
required: true
default: '-azvr --inplace --exclude=".*"'
CACHE_CLEAR:
description: "Optional WPE Clear cache"
required: false
default: true
SRC_PATH:
description: "An optional source directory to deploy other than the root directory that is being versioned."
default: "."
required: false
REMOTE_PATH:
description: "An optional destination directory to deploy to other than the WordPress root."
default: ""
required: false
WPE_ENV:
description: "Destination to deploy to WPE"
required: false
PRD_ENV:
description: "Destination to deploy to WPE Prod"
required: false
STG_ENV:
description: "Destination to deploy to WPE Stage"
required: false
DEV_ENV:
description: "Destination to deploy to WPE Dev"
required: false
SCRIPT:
description: "File containing custom scripts run after the rsync"
required: false
runs:
using: "docker"
image: docker://wpengine/site-deploy:1.0.6
env:
WPE_SSHG_KEY_PRIVATE: ${{ inputs.WPE_SSHG_KEY_PRIVATE }}
WPE_ENV: ${{ inputs.WPE_ENV }}
PRD_ENV: ${{ inputs.PRD_ENV }}
STG_ENV: ${{ inputs.STG_ENV }}
DEV_ENV: ${{ inputs.DEV_ENV }}
REMOTE_PATH: ${{ inputs.REMOTE_PATH }}
SRC_PATH: ${{ inputs.SRC_PATH }}
FLAGS: ${{ inputs.FLAGS }}
PHP_LINT: ${{ inputs.PHP_LINT }}
CACHE_CLEAR: ${{ inputs.CACHE_CLEAR }}
SCRIPT: ${{ inputs.SCRIPT }}
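A deployment sketch; the environment name, paths, and v3 version pin are illustrative:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: wpengine/github-action-wpe-site-deploy@v3   # version pin is illustrative
        with:
          WPE_SSHG_KEY_PRIVATE: ${{ secrets.WPE_SSHG_KEY_PRIVATE }}
          PRD_ENV: mysitename                       # illustrative WP Engine install name
          SRC_PATH: wp-content/themes/my-theme      # illustrative source path
          REMOTE_PATH: wp-content/themes/my-theme
          PHP_LINT: true
```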
Action ID: marketplace/ArtiomTr/jest-coverage-report-action
Author: Artiom Tretjakovas <artiom.tretjakovas2@gmail.com>
Publisher: ArtiomTr
Repository: github.com/ArtiomTr/jest-coverage-report-action
Track your code coverage in each pull request.
| Name | Required | Description |
|---|---|---|
| github-token | Required | A github access token Default: ${{ github.token }} |
| test-script | Optional | A custom npm script to get coverage Default: npx jest |
| threshold | Optional | Coverage threshold. If total coverage is less than threshold, PR will be rejected |
| working-directory | Optional | Custom working directory |
| icons | Optional | Which icons to use. Available choices: emoji, ascii Default: emoji |
| annotations | Optional | What type of annotations to show. Options: none, coverage, failed-tests, all Default: all |
| package-manager | Optional | Which package manager to use; can be `npm`, `yarn`, `pnpm`, or `bun` Default: npm |
| skip-step | Optional | `none` for running all steps, `install` to skip installing dependencies or `all` to skip installing dependencies and running the test script Default: none |
| custom-title | Optional | Sets the title of the coverage report comment in Github. |
| coverage-file | Optional | Path to a previously generated report.json file (bypasses running the test again). |
| base-coverage-file | Optional | Path to the report.json file to compare against. This should be from the branch the PR is merging into. |
| prnumber | Optional | Pull Request number for this commit. Use if your action is running in a PR, but not on the pull_request event, so that you do not get multiple comments. |
| output | Optional | What output should action produce? `comment` - add comment to PR, `report-markdown` - report markdown text in "outputs.report". Default: comment |
| Name | Description |
|---|---|
| report | Generated markdown report. Exists only if input option "output" is set to "report-markdown". |
name: 'Jest coverage report'
description: 'Track your code coverage in each pull request.'
author: 'Artiom Tretjakovas <artiom.tretjakovas2@gmail.com>'
branding:
icon: 'activity'
color: 'gray-dark'
inputs:
github-token:
required: true
description: 'A github access token'
default: ${{ github.token }}
test-script:
required: false
description: 'A custom npm script to get coverage'
default: npx jest
threshold:
required: false
description: 'Coverage threshold. If total coverage is less than threshold, PR will be rejected'
working-directory:
required: false
description: 'Custom working directory'
icons:
required: false
description: 'Which icons to use. Available choices: emoji, ascii'
default: emoji
annotations:
required: false
description: 'What type of annotations to show. Options: none, coverage, failed-tests, all'
default: all
package-manager:
required: false
description: 'Which package manager to use; can be `npm`, `yarn`, `pnpm`, or `bun`'
default: 'npm'
skip-step:
required: false
description: '`none` for running all steps, `install` to skip installing dependencies or `all` to skip installing dependencies and running the test script'
default: 'none'
custom-title:
required: false
description: 'Sets the title of the coverage report comment in Github.'
coverage-file:
required: false
description: Path to a previously generated report.json file (bypasses running the test again)
base-coverage-file:
required: false
description: Path to the report.json file to compare against. This should be from the branch the PR is merging into.
prnumber:
required: false
description: Pull Request number for this commit. Use if your action is running in a PR, but not on the pull_request event, so that you do not get multiple comments.
output:
required: false
description: What output should action produce? `comment` - add comment to PR, `report-markdown` - report markdown text in "outputs.report".
default: comment
outputs:
report:
description: Generated markdown report. Exists only if input option "output" is set to "report-markdown".
value: ${{ steps.main.outputs.report }}
runs:
using: composite
steps:
- run: $GITHUB_ACTION_PATH/run.sh
shell: bash
id: main
env:
INPUT_GITHUB-TOKEN: ${{ inputs.github-token }}
INPUT_TEST-SCRIPT: ${{ inputs.test-script }}
INPUT_THRESHOLD: ${{ inputs.threshold }}
INPUT_WORKING-DIRECTORY: ${{ inputs.working-directory }}
INPUT_ICONS: ${{ inputs.icons }}
INPUT_ANNOTATIONS: ${{ inputs.annotations }}
INPUT_PACKAGE-MANAGER: ${{ inputs.package-manager }}
INPUT_SKIP-STEP: ${{ inputs.skip-step }}
INPUT_CUSTOM-TITLE: ${{ inputs.custom-title }}
INPUT_COVERAGE-FILE: ${{ inputs.coverage-file }}
INPUT_BASE-COVERAGE-FILE: ${{ inputs.base-coverage-file }}
INPUT_PRNUMBER: ${{ inputs.prnumber }}
INPUT_OUTPUT: ${{ inputs.output }}
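A pull-request sketch; the package manager, threshold, and v2 version pin are illustrative, and `pull-requests: write` is an assumption about what posting the coverage comment requires:

```yaml
on: [pull_request]
jobs:
  coverage:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # assumed: needed to post the coverage comment
    steps:
      - uses: actions/checkout@v4
      - uses: ArtiomTr/jest-coverage-report-action@v2   # version pin is illustrative
        with:
          package-manager: yarn
          test-script: yarn jest
          threshold: 80
```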
Action ID: marketplace/Ilshidur/action-slack
Author: Ilshidur
Publisher: Ilshidur
Repository: github.com/Ilshidur/action-slack
Outputs a message to Slack.
| Name | Required | Description |
|---|---|---|
| args | Required | The message to display in the Slack notification. |
name: 'GitHub Action for Slack'
description: 'Outputs a message to Slack.'
author: Ilshidur
inputs:
args:
description: 'The message to display in the Slack notification.'
required: true
# TODO: Handle configuration through Action inputs.
# slack_webhook:
# slack_avatar:
# slack_custom_payload:
# slack_username:
# slack_channel:
runs:
using: node12
main: dist/index.js
branding:
icon: hash
color: red
Action ID: marketplace/tibdex/backport
Author: Thibault Derousseaux <tibdex@gmail.com>
Publisher: tibdex
Repository: github.com/tibdex/backport
Automatically backport PRs to other branches by simply labeling them.
| Name | Required | Description |
|---|---|---|
| body_template | Optional | Lodash template for the backport PR's body. The data properties are: - base: backport PR's base branch - body: original PR's body - mergeCommitSha: SHA of the original PR's merge commit - number: original PR's number. Default: Backport <%= mergeCommitSha %> from #<%= number %>. |
| github_token | Required | Token for the GitHub API. |
| head_template | Optional | Lodash template for the backport PR's head branch. The data properties are: - base: backport PR's base branch - number: original PR's number. Default: backport-<%= number %>-to-<%= base %> |
| label_pattern | Optional | The regular expression pattern that PR labels will be tested on to decide whether the PR should be backported and where. The backport PR's base branch will be extracted from the pattern's required `base` named capturing group. Default: ^backport (?<base>([^ ]+))$ |
| labels_template | Optional | Lodash template compiling to a JSON array of labels to add to the backport PR. The data properties are: - base: backport PR's base branch - labels: array containing the original PR's labels, excluding those matching `label_pattern`. Default: [] |
| title_template | Optional | Lodash template for the backport PR's title. The data properties are: - base: backport PR's base branch - number: original PR's number - title: original PR's title. Default: [Backport <%= base %>] <%= title %> |
| Name | Description |
|---|---|
| created_pull_requests | A JSON stringified object mapping the base branch of the created pull requests to their number. |
name: Backporting
author: Thibault Derousseaux <tibdex@gmail.com>
description: >
Automatically backport PRs to other branches by simply labeling them.
inputs:
body_template:
description: >
Lodash template for the backport PR's body.
The data properties are:
- base: backport PR's base branch
- body: original PR's body
- mergeCommitSha: SHA of the original PR's merge commit
- number: original PR's number
default: "Backport <%= mergeCommitSha %> from #<%= number %>."
github_token:
description: Token for the GitHub API.
required: true
head_template:
description: >
Lodash template for the backport PR's head branch.
The data properties are:
- base: backport PR's base branch
- number: original PR's number
default: "backport-<%= number %>-to-<%= base %>"
label_pattern:
description: >
The regular expression pattern that PR labels will be tested on to decide whether the PR should be backported and where.
The backport PR's base branch will be extracted from the pattern's required `base` named capturing group.
default: "^backport (?<base>([^ ]+))$"
labels_template:
description: >
Lodash template compiling to a JSON array of labels to add to the backport PR.
The data properties are:
- base: backport PR's base branch
- labels: array containing the original PR's labels, excluding those matching `label_pattern`.
default: "[]"
title_template:
description: >
Lodash template for the backport PR's title.
The data properties are:
- base: backport PR's base branch
- number: original PR's number
- title: original PR's title
default: "[Backport <%= base %>] <%= title %>"
outputs:
created_pull_requests:
description: A JSON stringified object mapping the base branch of the created pull requests to their number.
runs:
using: node16
main: dist/index.js
branding:
icon: arrow-left-circle
color: purple
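A sketch of one common wiring for this action; the trigger, permissions, and v2 version pin are illustrative assumptions rather than requirements stated in this entry:

```yaml
on:
  pull_request_target:
    types: [closed, labeled]
jobs:
  backport:
    if: github.event.pull_request.merged
    runs-on: ubuntu-latest
    permissions:
      contents: write        # assumed: needed to push the backport branch
      pull-requests: write   # assumed: needed to open the backport PR
    steps:
      - uses: tibdex/backport@v2   # version pin is illustrative
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
```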
Action ID: marketplace/pypa/gh-action-pypi-publish
Author: Unknown
Publisher: pypa
Repository: github.com/pypa/gh-action-pypi-publish
Upload Python distribution packages to PyPI
| Name | Required | Description |
|---|---|---|
| user | Optional | PyPI user Default: __token__ |
| password | Optional | Password for your PyPI user or an access token |
| repository-url | Optional | The repository URL to use |
| repository_url | Optional | [DEPRECATED] The repository URL to use Default: https://upload.pypi.org/legacy/ |
| packages-dir | Optional | The target directory for distribution |
| packages_dir | Optional | [DEPRECATED] The target directory for distribution Default: dist |
| verify-metadata | Optional | Check metadata before uploading |
| verify_metadata | Optional | [DEPRECATED] Check metadata before uploading Default: true |
| skip-existing | Optional | Do not fail if a Python package distribution exists in the target package index |
| skip_existing | Optional | [DEPRECATED] Do not fail if a Python package distribution exists in the target package index Default: false |
| verbose | Optional | Show verbose output. Default: false |
| print-hash | Optional | Show hash values of files to be uploaded |
| print_hash | Optional | [DEPRECATED] Show hash values of files to be uploaded Default: false |
| attestations | Optional | Enable experimental support for PEP 740 attestations. Only works with PyPI and TestPyPI via Trusted Publishing. Default: true |
---
name: pypi-publish
description: Upload Python distribution packages to PyPI
inputs:
user:
description: PyPI user
required: false
default: __token__
password:
description: Password for your PyPI user or an access token
required: false
repository-url: # Canonical alias for `repository_url`
description: The repository URL to use
required: false
repository_url: # DEPRECATED ALIAS; TODO: Remove in v3+
description: >-
[DEPRECATED]
The repository URL to use
deprecationMessage: >-
The inputs have been normalized to use kebab-case.
Use `repository-url` instead.
required: false
default: https://upload.pypi.org/legacy/
packages-dir: # Canonical alias for `packages_dir`
description: The target directory for distribution
required: false
# default: dist # TODO: uncomment once alias removed
packages_dir: # DEPRECATED ALIAS; TODO: Remove in v3+
description: >-
[DEPRECATED]
The target directory for distribution
deprecationMessage: >-
The inputs have been normalized to use kebab-case.
Use `packages-dir` instead.
required: false
default: dist
verify-metadata: # Canonical alias for `verify_metadata`
description: Check metadata before uploading
required: false
# default: 'true' # TODO: uncomment once alias removed
verify_metadata: # DEPRECATED ALIAS; TODO: Remove in v3+
description: >-
[DEPRECATED]
Check metadata before uploading
deprecationMessage: >-
The inputs have been normalized to use kebab-case.
Use `verify-metadata` instead.
required: false
default: 'true'
skip-existing: # Canonical alias for `skip_existing`
description: >-
Do not fail if a Python package distribution
exists in the target package index
required: false
# default: 'false' # TODO: uncomment once alias removed
skip_existing: # DEPRECATED ALIAS; TODO: Remove in v3+
description: >-
[DEPRECATED]
Do not fail if a Python package distribution
exists in the target package index
deprecationMessage: >-
The inputs have been normalized to use kebab-case.
Use `skip-existing` instead.
required: false
default: 'false'
verbose:
description: Show verbose output.
required: false
default: 'false'
print-hash: # Canonical alias for `print_hash`
description: Show hash values of files to be uploaded
required: false
# default: 'false' # TODO: uncomment once alias removed
print_hash: # DEPRECATED ALIAS; TODO: Remove in v3+
description: >-
[DEPRECATED]
Show hash values of files to be uploaded
deprecationMessage: >-
The inputs have been normalized to use kebab-case.
Use `print-hash` instead.
required: false
default: 'false'
attestations:
description: >-
Enable experimental support for PEP 740 attestations.
Only works with PyPI and TestPyPI via Trusted Publishing.
required: false
default: 'true'
branding:
color: yellow
icon: upload-cloud
runs:
using: composite
steps:
- name: Fail-fast in unsupported environments
if: runner.os != 'Linux'
run: |
>&2 echo This action is only able to run under GNU/Linux environments
exit 1
shell: bash -eEuo pipefail {0}
- name: Reset path if needed
run: | # zizmor: ignore[github-env] PATH is not user-controlled
# Reset path if needed
# https://github.com/pypa/gh-action-pypi-publish/issues/112
if [[ $PATH != *"/usr/bin"* ]]; then
echo "\$PATH=$PATH. Resetting \$PATH for GitHub Actions."
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
echo "PATH=$PATH" >>"$GITHUB_ENV"
echo "$PATH" >>"$GITHUB_PATH"
echo "\$PATH reset. \$PATH=$PATH"
fi
shell: bash
- name: Discover pre-installed Python
id: pre-installed-python
run: |
# 🔎 Discover pre-installed Python
echo "python-path=$(command -v python3 || :)" | tee -a "${GITHUB_OUTPUT}"
shell: bash
- name: Install Python 3
if: steps.pre-installed-python.outputs.python-path == ''
id: new-python
# yamllint disable-line rule:line-length
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- name: Create Docker container action
run: |
# Create Docker container action
${{
steps.pre-installed-python.outputs.python-path == ''
&& steps.new-python.outputs.python-path
|| steps.pre-installed-python.outputs.python-path
}} '${{ github.action_path }}/create-docker-action.py'
env:
# Set repo and ref from which to run Docker container action
# to handle cases in which `github.action_` context is not set
# https://github.com/actions/runner/issues/2473
REF: >-
${{
github.action_ref
|| github.event.pull_request.head.ref
|| github.ref_name
}}
REPO: >-
${{
github.action_repository
|| github.event.pull_request.head.repo.full_name
|| github.repository
}}
REPO_ID: >-
${{
github.event.pull_request.base.repo.id
|| github.repository_id
}}
shell: bash
- name: Run Docker container
# The generated trampoline action must exist in the allowlisted
# runner-defined working directory so it can be referenced by the
# relative path starting with `./`.
#
# This mutates the end-user's workspace slightly but uses a path
# that is unlikely to clash with somebody else's use.
#
# We cannot use randomized paths because the composite action
# syntax does not allow accessing variables in `uses:`. This
# means that we end up having to hardcode this path both here and
# in `create-docker-action.py`.
uses: ./.github/.tmp/.generated-actions/run-pypi-publish-in-docker-container
with:
user: ${{ inputs.user }}
password: ${{ inputs.password }}
repository-url: ${{ inputs.repository-url || inputs.repository_url }}
packages-dir: ${{ inputs.packages-dir || inputs.packages_dir }}
verify-metadata: ${{ inputs.verify-metadata || inputs.verify_metadata }}
skip-existing: ${{ inputs.skip-existing || inputs.skip_existing }}
verbose: ${{ inputs.verbose }}
print-hash: ${{ inputs.print-hash || inputs.print_hash }}
attestations: ${{ inputs.attestations }}
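A Trusted Publishing sketch (no `password` set); the environment name, artifact name, and `release/v1` ref are illustrative, and `id-token: write` is assumed to be what Trusted Publishing needs:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi        # illustrative environment name
    permissions:
      id-token: write        # assumed: required for Trusted Publishing
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist         # illustrative artifact containing built distributions
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1   # ref is illustrative
        with:
          packages-dir: dist/
```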
Action ID: marketplace/amirisback/android-share-image-to-other-apps
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-share-image-to-other-apps
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/android-text-spannable
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-text-spannable
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/naughty-compose
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/naughty-compose
Learning Jetpack Compose
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'naughty-compose'
description: 'Learning Jetpack Compose'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/rhysd/action-setup-vim
Author: rhysd <https://rhysd.github.io>
Publisher: rhysd
Repository: github.com/rhysd/action-setup-vim
Setup Vim or Neovim text editors on GitHub Actions
| Name | Required | Description |
|---|---|---|
| version | Optional | Version of Vim or Neovim to install. Valid values are 'stable', 'nightly' or version tag such as 'v8.2.0126'. Note that this value must exactly match to a tag name when installing the specific version. Default: stable |
| neovim | Optional | Setting to true will install Neovim. |
| configure-args | Optional | Arguments passed to ./configure execution when building Vim from source. |
| token | Optional | Personal access token for GitHub API. It is used for calling GitHub API when Neovim asset is not found in stable releases and needs to fallback. You don't need to set this input since it is set automatically. Default: ${{ github.token }} |
| Name | Description |
|---|---|
| executable | Absolute file path to the installed executable. |
| vim-dir | Absolute file path to the $VIM directory of the installation. Please see `:help $VIM` for more details. |
name: 'Setup Vim'
author: 'rhysd <https://rhysd.github.io>'
description: 'Setup Vim or Neovim text editors on GitHub Actions'
branding:
icon: 'edit'
color: 'green'
inputs:
version:
description: >
Version of Vim or Neovim to install. Valid values are 'stable', 'nightly' or version tag such
as 'v8.2.0126'. Note that this value must exactly match to a tag name when installing the
specific version.
required: false
default: 'stable'
neovim:
description: >
Setting to true will install Neovim.
required: false
default: false
configure-args:
description: >
Arguments passed to ./configure execution when building Vim from source.
required: false
token:
description: >
Personal access token for GitHub API. It is used for calling GitHub API when Neovim asset is
not found in stable releases and needs to fallback. You don't need to set this input since it
is set automatically.
default: ${{ github.token }}
outputs:
executable:
description: >
Absolute file path to the installed executable.
vim-dir:
description: >
Absolute file path to the $VIM directory of the installation. Please see `:help $VIM` for
more details.
runs:
using: 'node24'
main: 'src/index.js'
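A matrix sketch exercising both Vim and Neovim via the documented inputs and the `executable` output; the matrix shape and v1 version pin are illustrative:

```yaml
jobs:
  vim-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        neovim: [true, false]
    steps:
      - uses: actions/checkout@v4
      - uses: rhysd/action-setup-vim@v1   # version pin is illustrative
        id: vim
        with:
          version: stable
          neovim: ${{ matrix.neovim }}
      - run: ${{ steps.vim.outputs.executable }} --version
```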
Action ID: marketplace/mheap/setup-inso
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/setup-inso
Install inso from Kong
| Name | Required | Description |
|---|---|---|
| inso-version | Required | The version of inso to install |
| compression | Optional | The compression algorithm to use. One of: [gzip, bzip] |
| wrapper | Optional | Add a wrapper script to make stdout, stderr and errorcode available as outputs Default: false |
name: Setup inso
description: Install inso from Kong
runs:
using: node20
main: dist/index.js
inputs:
inso-version:
description: The version of inso to install
required: true
compression:
description: "The compression algorithm to use. One of: [gzip, bzip]"
required: false
wrapper:
description: >-
Add a wrapper script to make stdout, stderr and errorcode available as
outputs
default: 'false'
required: false
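A sketch; the inso version, lint command, and v1 version pin are illustrative:

```yaml
jobs:
  lint-spec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mheap/setup-inso@v1   # version pin is illustrative
        with:
          inso-version: 3.12.0      # illustrative version
      - run: inso lint spec openapi.yaml   # illustrative command and spec file
```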
Action ID: marketplace/azure/functions-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/functions-action
Deploy Function App to Azure Functions
| Name | Required | Description |
|---|---|---|
| app-name | Required | Name of the Azure Function App |
| package | Optional | Path to package or folder. *.zip or a folder to deploy Default: . |
| slot-name | Optional | Function app slot to be deployed to |
| publish-profile | Optional | Publish profile (*.publishsettings) file contents with web deploy secrets |
| respect-pom-xml | Optional | Automatically look up Java function app artifact from pom.xml (default: 'false'). When this is set to 'true', 'package' should point to the folder of host.json. Default: false |
| respect-funcignore | Optional | Remove unwanted files defined in .funcignore file (default: 'false'). When this is set to 'true', 'package' should point to the folder of host.json. Default: false |
| scm-do-build-during-deployment | Optional | Enable build action from Kudu when the package is deployed onto the function app. This will temporarily change the SCM_DO_BUILD_DURING_DEPLOYMENT setting for this deployment. To bypass this and use the existing settings from your function app, please set this to an empty string ''. To enable remote build for your project, please set this and 'enable-oryx-build' both to 'true'. By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the redundant build action from Kudu endpoint. (default: 'false'). Default: false |
| enable-oryx-build | Optional | Use Oryx Build from Kudu when the package is deployed onto the function app. (Linux functions only). This will temporarily change the ENABLE_ORYX_BUILD setting from this deployment. To bypass this and use the existing settings from your function app, please set this to an empty string ''. To enable remote build for your project, please set this and 'scm-do-build-during-deployment' both to 'true'. By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the redundant build action from Kudu endpoint. (default: 'false'). Default: false |
| sku | Optional | For function app on Flex Consumption plan, set this to 'flexconsumption'. You can skip this parameter for function app on other plans. If using RBAC credentials, then by default, GitHub Action will resolve the value for this parameter. But if using 'publish-profile', then you must set this for function app on Flex Consumption plan. |
| remote-build | Optional | For function app on Flex Consumption plan, enable build action from Kudu when the package is deployed onto the function app by setting this to 'true'. For function app on Flex Consumption plan, do not set 'scm-do-build-during-deployment' and 'enable-oryx-build'. By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the redundant build action from Kudu endpoint. (default: 'false'). Default: false |
| Name | Description |
|---|---|
| app-url | URL to work with your function app |
| package-url | URL to the package zip file if using package deployment |
# Azure Functions GitHub Action
name: 'Azure Functions Action'
description: 'Deploy Function App to Azure Functions'
inputs:
app-name:
description: 'Name of the Azure Function App'
required: true
package:
description: 'Path to package or folder. *.zip or a folder to deploy'
required: false
default: '.'
slot-name:
description: 'Function app slot to be deployed to'
required: false
publish-profile:
description: 'Publish profile (*.publishsettings) file contents with web deploy secrets'
required: false
respect-pom-xml:
description: "Automatically look up Java function app artifact from pom.xml (default: 'false').
When this is set to 'true', 'package' should point to the folder of host.json."
required: false
default: 'false'
respect-funcignore:
description: "Remove unwanted files defined in .funcignore file (default: 'false').
When this is set to 'true', 'package' should point to the folder of host.json."
required: false
default: 'false'
scm-do-build-during-deployment:
description: "Enable build action from Kudu when the package is deployed onto the function app.
This will temporarily change the SCM_DO_BUILD_DURING_DEPLOYMENT setting for this deployment.
To bypass this and use the existing settings from your function app, please set this to an empty
string ''.
To enable remote build for your project, please set this and 'enable-oryx-build' both to 'true'.
By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the
redundant build action from Kudu endpoint. (default: 'false')."
required: false
default: 'false'
enable-oryx-build:
description: "Use Oryx Build from Kudu when the package is deployed onto the function app. (Linux functions only).
This will temporarily change the ENABLE_ORYX_BUILD setting from this deployment.
To bypass this and use the existing settings from your function app, please set this to an empty
string ''.
To enable remote build for your project, please set this and 'scm-do-build-during-deployment' both
to 'true'.
By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the
redundant build action from Kudu endpoint. (default: 'false')."
required: false
default: 'false'
sku:
description: "For function app on Flex Consumption plan, set this to 'flexconsumption'. You can skip this parameter for function app on other plans.
If using RBAC credentials, then by default, GitHub Action will resolve the value for this parameter. But if using 'publish-profile',
then you must set this for function app on Flex Consumption plan."
required: false
remote-build:
description: "For function app on Flex Consumption plan, enable build action from Kudu when the package is deployed onto the function app by setting this to 'true'.
For function app on Flex Consumption plan, do not set 'scm-do-build-during-deployment' and 'enable-oryx-build'.
By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the
redundant build action from Kudu endpoint. (default: 'false')."
required: false
default: 'false'
outputs:
app-url:
description: 'URL to work with your function app'
package-url:
description: 'URL to the package zip file if using package deployment'
branding:
icon: 'functionapp.svg'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main.js'
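A publish-profile deployment sketch; the app name, package path, secret name, and v1 version pin are illustrative:

```yaml
jobs:
  deploy-function:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/functions-action@v1   # version pin is illustrative
        id: deploy
        with:
          app-name: my-function-app       # illustrative app name
          package: ./output               # illustrative package folder
          publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
      - run: echo "Deployed to ${{ steps.deploy.outputs.app-url }}"
```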
Action ID: marketplace/Oval-Tutu/publish-to-itch-with-butler
Author: Oval Tutu
Publisher: Oval-Tutu
Repository: github.com/Oval-Tutu/publish-to-itch-with-butler
Automatically sets up Butler and provides a simple GitHub Action to publish releases to Itch.io
| Name | Required | Description |
|---|---|---|
| api-key | Required | Butler API key |
| channel | Required | itch.io channel name. For example: android, html, linux, osx or windows |
| itch_user | Required | itch.io user name |
| itch_game | Required | itch.io game name |
| package | Required | The directory or file to upload |
| version | Optional | game version |
name: 'Publish to Itch.io with Butler'
description: 'Automatically sets up Butler and provides a simple GitHub Action to publish releases to Itch.io'
author: 'Oval Tutu'
branding:
color: red
icon: send
inputs:
api-key:
description: 'Butler API key'
required: true
channel:
description: 'itch.io channel name. For example: android, html, linux, osx or windows'
required: true
itch_user:
description: 'itch.io user name'
required: true
itch_game:
description: 'itch.io game name'
required: true
package:
description: 'The directory or file to upload'
required: true
version:
description: 'game version'
required: false
default: ''
runs:
using: composite
steps:
- name: Download butler
env:
BUTLER_API_KEY: ${{ inputs.api-key }}
shell: bash
run: |
# Set butler platform based on runner OS
BUTLER_PLATFORM="linux-amd64"
BUTLER_EXEC="butler"
if [ "${{ runner.os }}" = "Windows" ]; then
BUTLER_PLATFORM="windows-amd64"
BUTLER_EXEC="butler.exe"
elif [ "${{ runner.os }}" = "macOS" ]; then
BUTLER_PLATFORM="darwin-amd64"
else
echo "Unsupported OS: ${{ runner.os }}"
exit 1
fi
mkdir ./tools 2>/dev/null || true
pushd tools
curl -sSLfo ./butler.zip "https://broth.itch.ovh/butler/${BUTLER_PLATFORM}/LATEST/archive/default"
unzip butler.zip
chmod +x ./${BUTLER_EXEC}
popd
./tools/${BUTLER_EXEC} -V
- name: Upload to itch.io
env:
BUTLER_API_KEY: ${{ inputs.api-key }}
shell: bash
run: |
# Set butler executable name based on runner OS
BUTLER_EXEC="butler"
if [ "${{ runner.os }}" = "Windows" ]; then
BUTLER_EXEC="butler.exe"
fi
versionArgument=""
if [ -n "${{ inputs.version }}" ]; then
versionArgument="--userversion ${{ inputs.version }}"
fi
./tools/${BUTLER_EXEC} push \
"${{ inputs.package }}" \
${{ inputs.itch_user }}/${{ inputs.itch_game }}:${{ inputs.channel }} ${versionArgument}
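A minimal usage sketch for publishing a build on release; the version tag, secret name, user/game names, and package path are illustrative assumptions:
```yaml
# Illustrative values; the tag, secret name, user, game and path are assumptions.
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Oval-Tutu/publish-to-itch-with-butler@v1   # tag illustrative
        with:
          api-key: ${{ secrets.BUTLER_API_KEY }}         # hypothetical secret
          channel: html
          itch_user: my-user                             # hypothetical
          itch_game: my-game                             # hypothetical
          package: build/web                             # hypothetical build directory
          version: ${{ github.ref_name }}
```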
Action ID: marketplace/dflook/terraform-new-workspace
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-new-workspace
Creates a new Terraform workspace. If the workspace already exists, succeeds without doing anything.
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the Terraform root module directory. Default: . |
workspace |
Required | The name of the Terraform workspace to create. |
backend_config |
Optional | List of Terraform backend config values, one per line. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
name: terraform-new-workspace
description: Creates a new Terraform workspace. If the workspace already exists, succeeds without doing anything.
author: Daniel Flook
inputs:
path:
description: The path to the Terraform root module directory.
required: false
default: "."
workspace:
description: The name of the Terraform workspace to create.
required: true
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/new-workspace.sh
branding:
icon: globe
color: purple
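A minimal usage sketch creating a per-PR workspace; the version tag, module path, and workspace naming are illustrative assumptions, and backend configuration is omitted:
```yaml
# Illustrative values; the tag, path and workspace name are assumptions.
on: pull_request
jobs:
  workspace:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/terraform-new-workspace@v1   # tag illustrative
        with:
          path: terraform                          # hypothetical module path
          workspace: preview-${{ github.head_ref }}
```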
Action ID: marketplace/google-github-actions/send-google-chat-webhook
Author: Unknown
Publisher: google-github-actions
Repository: github.com/google-github-actions/send-google-chat-webhook
Send message to your google chat workspace
| Name | Required | Description |
|---|---|---|
webhook_url |
Required | Chat space webhook url |
mention |
Optional | Mention people or not, format <users/user_id> Default: <users/all> |
# Copyright 2023 The Authors (see AUTHORS file)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'send-google-chat-webhook'
description: 'Send message to your google chat workspace'
inputs:
webhook_url:
description: |-
Chat space webhook url
required: true
mention:
description: |-
Mention people or not, format <users/user_id>
default: '<users/all>'
required: false
runs:
using: 'composite'
steps:
- name: 'download binary'
shell: 'bash'
env:
BINARY_NAME: 'send-google-chat-webhook'
# manually update VERSION after each release
# VERSION should not contain v.
VERSION: '0.0.4'
run: |-
case "${RUNNER_OS}" in
"Linux")
CURL_OS="linux"
;;
"macOS")
CURL_OS="darwin"
;;
"Windows")
CURL_OS="windows"
esac
case "${RUNNER_ARCH}" in
"X64")
CURL_ARCH="amd64"
;;
"arm64")
CURL_ARCH="arm64"
;;
esac
curl -LOv "https://github.com/google-github-actions/send-google-chat-webhook/releases/download/v${{ env.VERSION }}/send-google-chat-webhook_${{ env.VERSION }}_${CURL_OS}_${CURL_ARCH}.tar.gz"
tar xzf ${{ env.BINARY_NAME }}_${{ env.VERSION }}_${CURL_OS}_${CURL_ARCH}.tar.gz
- name: 'send message via cli'
shell: 'bash'
env:
GITHUB_CONTEXT: '${{ toJson(github) }}'
JOB_CONTEXT: '${{ toJson(job) }}'
STEPS_CONTEXT: '${{ toJson(steps) }}'
RUNNER_CONTEXT: '${{ toJson(runner) }}'
STRATEGY_CONTEXT: '${{ toJson(strategy) }}'
MATRIX_CONTEXT: '${{ toJson(matrix) }}'
WEBHOOK_URL: '${{ inputs.webhook_url }}'
run: |-
./send-google-chat-webhook chat workflownotification --webhook-url="${WEBHOOK_URL}"
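A minimal usage sketch for sending a workflow notification; the version tag and secret name are illustrative assumptions:
```yaml
# Illustrative; the tag and secret name are assumptions.
on: push
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - uses: google-github-actions/send-google-chat-webhook@v0   # tag illustrative
        with:
          webhook_url: ${{ secrets.CHAT_WEBHOOK_URL }}            # hypothetical secret
          mention: '<users/all>'
```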
Action ID: marketplace/mxschmitt/action-tmate
Author: Max Schmitt
Publisher: mxschmitt
Repository: github.com/mxschmitt/action-tmate
Debug your GitHub Actions Environment interactively by using SSH or a Web shell
| Name | Required | Description |
|---|---|---|
sudo |
Optional | If apt should be executed with sudo or without Default: auto |
install-dependencies |
Optional | Whether or not to install dependencies for tmate on linux (openssh-client, xz-utils) Default: true |
limit-access-to-actor |
Optional | Whether to authorize only the public SSH keys of the user triggering the workflow (defaults to true if the GitHub profile of the user has a public SSH key) Default: auto |
detached |
Optional | In detached mode, the workflow job will continue while the tmate session is active Default: false |
connect-timeout-seconds |
Optional | How long in seconds to wait for a connection to be established Default: 600 |
tmate-server-host |
Optional | The hostname for your tmate server (e.g. ssh.example.org) |
tmate-server-port |
Optional | The port for your tmate server (e.g. 2222) |
tmate-server-rsa-fingerprint |
Optional | The RSA fingerprint for your tmate server |
tmate-server-ed25519-fingerprint |
Optional | The ed25519 fingerprint for your tmate server |
msys2-location |
Optional | The root of the MSYS2 installation (on Windows runners) Default: C:\msys64 |
github-token |
Optional | Personal access token (PAT) used to call into GitHub's REST API. We recommend using a service account with the least permissions necessary. Also when generating a new PAT, select the least scopes necessary. [Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
Default: ${{ github.token }} |
| Name | Description |
|---|---|
ssh-command |
The SSH command to connect to the tmate session (only set when detached mode is enabled) |
ssh-address |
The raw SSH address without the "ssh" prefix (only set when detached mode is enabled) |
web-url |
The web URL to connect to the tmate session (only set when detached mode is enabled and web URL is available) |
name: 'Debugging with tmate'
description: 'Debug your GitHub Actions Environment interactively by using SSH or a Web shell'
branding:
icon: terminal
author: 'Max Schmitt'
runs:
using: 'node20'
main: 'lib/index.js'
post: 'lib/index.js'
post-if: '!cancelled()'
inputs:
sudo:
description: 'If apt should be executed with sudo or without'
required: false
default: 'auto'
install-dependencies:
description: 'Whether or not to install dependencies for tmate on linux (openssh-client, xz-utils)'
required: false
default: 'true'
limit-access-to-actor:
description: 'Whether to authorize only the public SSH keys of the user triggering the workflow (defaults to true if the GitHub profile of the user has a public SSH key)'
required: false
default: 'auto'
detached:
description: 'In detached mode, the workflow job will continue while the tmate session is active'
required: false
default: 'false'
connect-timeout-seconds:
description: 'How long in seconds to wait for a connection to be established'
required: false
default: '600'
tmate-server-host:
description: 'The hostname for your tmate server (e.g. ssh.example.org)'
required: false
default: ''
tmate-server-port:
description: 'The port for your tmate server (e.g. 2222)'
required: false
default: ''
tmate-server-rsa-fingerprint:
description: 'The RSA fingerprint for your tmate server'
required: false
default: ''
tmate-server-ed25519-fingerprint:
description: 'The ed25519 fingerprint for your tmate server'
required: false
default: ''
msys2-location:
description: 'The root of the MSYS2 installation (on Windows runners)'
required: false
default: 'C:\msys64'
github-token:
description: >
Personal access token (PAT) used to call into GitHub's REST API.
We recommend using a service account with the least permissions necessary.
Also when generating a new PAT, select the least scopes necessary.
[Learn more about creating and using encrypted secrets](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
default: ${{ github.token }}
outputs:
ssh-command:
description: 'The SSH command to connect to the tmate session (only set when detached mode is enabled)'
ssh-address:
description: 'The raw SSH address without the "ssh" prefix (only set when detached mode is enabled)'
web-url:
description: 'The web URL to connect to the tmate session (only set when detached mode is enabled and web URL is available)'
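A minimal usage sketch that opens a detached tmate debug session on demand; the version tag is an illustrative assumption:
```yaml
# Illustrative; the tag is an assumption.
on: workflow_dispatch
jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Open a tmate debug session
        uses: mxschmitt/action-tmate@v3   # tag illustrative
        with:
          detached: true
          limit-access-to-actor: true
```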
Action ID: marketplace/appleboy/jenkins-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/jenkins-action
Triggering Jenkins Job through the API
| Name | Required | Description |
|---|---|---|
url |
Required | Jenkins base URL (e.g., http://jenkins.example.com/) |
user |
Optional | Jenkins username |
token |
Optional | Jenkins API token |
remote_token |
Optional | Jenkins remote trigger token |
job |
Required | Jenkins job name(s) - can specify multiple |
parameters |
Optional | Build parameters in multi-line key=value format (one per line) |
insecure |
Optional | Allow insecure SSL connections (default: false) Default: false |
wait |
Optional | Wait for job completion (default: false) Default: false |
poll_interval |
Optional | Interval between status checks (default: 10s) Default: 10s |
timeout |
Optional | Maximum time to wait for job completion (default: 30m) Default: 30m |
debug |
Optional | Enable debug mode to show detailed parameter information (default: false) Default: false |
ca_cert |
Optional | Custom CA certificate (PEM content, file path, or HTTP URL) |
name: "Trigger Jenkins Multiple Jobs"
description: "Triggering Jenkins Job through the API"
author: "Bo-Yi Wu"
inputs:
url:
description: "Jenkins base URL (e.g., http://jenkins.example.com/)"
required: true
user:
description: "Jenkins username"
token:
description: "Jenkins API token"
remote_token:
description: "Jenkins remote trigger token"
job:
description: "Jenkins job name(s) - can specify multiple"
required: true
parameters:
description: "Build parameters in multi-line key=value format (one per line)"
insecure:
description: "Allow insecure SSL connections (default: false)"
default: "false"
wait:
description: "Wait for job completion (default: false)"
default: "false"
poll_interval:
description: "Interval between status checks (default: 10s)"
default: "10s"
timeout:
description: "Maximum time to wait for job completion (default: 30m)"
default: "30m"
debug:
description: "Enable debug mode to show detailed parameter information (default: false)"
default: "false"
ca_cert:
description: "Custom CA certificate (PEM content, file path, or HTTP URL)"
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "check-circle"
color: "green"
Action ID: marketplace/vn7n24fzkq/govulncheck-action
Author: Unknown
Publisher: vn7n24fzkq
Repository: github.com/vn7n24fzkq/govulncheck-action
Run govulncheck
| Name | Required | Description |
|---|---|---|
go-version-input |
Optional | Version of Go to use for govulncheck Default: stable |
check-latest |
Optional | Set this option to true if you want the action to always check for the latest available Go version that satisfies the version spec Default: True |
cache |
Optional | Used to specify whether Go caching is needed. Set to true, if you would like to enable caching. Default: True |
go-package |
Optional | Go Package to scan with govulncheck Default: ./... |
work-dir |
Optional | Directory in which to run govulncheck Default: . |
repo-checkout |
Optional | Checkout the repository Default: True |
go-version-file |
Optional | Path to the go.mod or go.work file. |
output-format |
Optional | The format of the output Default: text |
output-file |
Optional | The file to which the govulncheck output is saved |
name: 'golang-govulncheck-action'
description: 'Run govulncheck'
inputs:
go-version-input: # version of Go to use for govulncheck
description: 'Version of Go to use for govulncheck'
required: false
default: 'stable'
check-latest:
description: 'Set this option to true if you want the action to always check for the latest available Go version that satisfies the version spec'
required: false
default: true
cache:
description: 'Used to specify whether Go caching is needed. Set to true, if you would like to enable caching.'
required: false
default: true
go-package:
description: 'Go Package to scan with govulncheck'
required: false
default: './...'
work-dir:
description: 'Directory in which to run govulncheck'
required: false
default: '.'
repo-checkout:
description: "Checkout the repository"
required: false
default: true
go-version-file:
description: 'Path to the go.mod or go.work file.'
required: false
output-format:
description: 'The format of the output'
required: false
default: 'text'
output-file:
description: 'The file to which the govulncheck output is saved'
required: false
default: ''
runs:
using: "composite"
steps:
- if: inputs.repo-checkout != 'false' # only explicit false prevents repo checkout
uses: actions/checkout@v4.1.1
- uses: actions/setup-go@v5.0.0
with:
go-version: ${{ inputs.go-version-input }}
check-latest: ${{ inputs.check-latest }}
go-version-file: ${{ inputs.go-version-file }}
cache: ${{ inputs.cache }}
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
shell: bash
- if: inputs.output-file == ''
name: Run govulncheck
run: govulncheck -C ${{ inputs.work-dir }} -format ${{ inputs.output-format }} ${{ inputs.go-package }}
shell: bash
- if: inputs.output-file != ''
name: Run govulncheck and save to file
run: govulncheck -C ${{ inputs.work-dir }} -format ${{ inputs.output-format }} ${{ inputs.go-package }} > ${{ inputs.output-file }}
shell: bash
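A minimal usage sketch that scans a module and saves the report to a file; the version tag, output format choice, and file name are illustrative assumptions:
```yaml
# Illustrative; the tag, format choice and file name are assumptions.
on: push
jobs:
  vulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: vn7n24fzkq/govulncheck-action@v1   # tag illustrative
        with:
          go-version-file: go.mod
          go-package: ./...
          output-format: json                    # hypothetical format choice
          output-file: govulncheck.json          # hypothetical file name
```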
Action ID: marketplace/mszostok/github-action-for-cli
Author: Unknown
Publisher: mszostok
Repository: github.com/mszostok/github-action-for-cli
Use this action to generate docs or code from your AsyncAPI document. Use default templates or provide your custom ones.
| Name | Required | Description |
|---|---|---|
cli_version |
Optional | Version of AsyncAPI CLI to be used. This is only needed if you want to test with a specific version of AsyncAPI CLI. Default is latest which is also the recommended option. |
command |
Optional | Command to run. Available commands in action :- generate, validate, convert, optimize and custom. Default is generate. For custom command, provide the whole command as input. List of available commands can be found in https://www.asyncapi.com/docs/tools/cli/usage. Default: generate |
filepath |
Optional | Path to AsyncAPI document. This input is required if command is set to generate, validate, convert or optimize. Default is ./asyncapi.yaml Default: asyncapi.yml |
template |
Optional | Template for the generator. Official templates are listed here https://github.com/search?q=topic%3Aasyncapi+topic%3Agenerator+topic%3Atemplate. You can pass template as npm package, url to git repository, link to tar file or local template. Default: @asyncapi/markdown-template@0.10.0 |
language |
Optional | Language of the generated code. This input is required if you want to generate models. List of available languages can be found in https://www.asyncapi.com/docs/tools/cli/usage#asyncapi-generate-models-language-file |
output |
Optional | Directory where to put the generated files. Can be used only with generate or convert commands. Default is output. Default: output |
parameters |
Optional | The command that you use might support and even require specific parameters to be passed to the CLI for the generation. Template parameters should be preceded by -p |
custom_command |
Optional | Custom command to be run. This input is required if command is set to custom. |
name: 'Generator, Validator, Converter and others - all in one for your AsyncAPI docs'
description: 'Use this action to generate docs or code from your AsyncAPI document. Use default templates or provide your custom ones.'
inputs:
cli_version:
description: 'Version of AsyncAPI CLI to be used. This is only needed if you want to test with a specific version of AsyncAPI CLI. Default is latest which is also the recommended option.'
required: false
default: ''
command:
description: 'Command to run. Available commands in action :- generate, validate, convert, optimize and custom. Default is generate. For custom command, provide the whole command as input. List of available commands can be found in https://www.asyncapi.com/docs/tools/cli/usage.'
required: false
default: 'generate'
filepath:
description: 'Path to AsyncAPI document. This input is required if command is set to generate, validate, convert or optimize. Default is ./asyncapi.yaml'
required: false
default: 'asyncapi.yml'
template:
description: 'Template for the generator. Official templates are listed here https://github.com/search?q=topic%3Aasyncapi+topic%3Agenerator+topic%3Atemplate. You can pass template as npm package, url to git repository, link to tar file or local template.'
default: '@asyncapi/markdown-template@0.10.0'
required: false
language:
description: 'Language of the generated code. This input is required if you want to generate models. List of available languages can be found in https://www.asyncapi.com/docs/tools/cli/usage#asyncapi-generate-models-language-file'
required: false
default: ''
output:
description: 'Directory where to put the generated files. Can be used only with generate or convert commands. Default is output.'
required: false
default: 'output'
parameters:
description: 'The command that you use might support and even require specific parameters to be passed to the CLI for the generation. Template parameters should be preceded by -p'
required: false
default: ''
custom_command:
description: 'Custom command to be run. This input is required if command is set to custom.'
required: false
default: ''
runs:
using: 'docker'
# This is the image that will be used to run the action.
# IMPORTANT: The version has to be changed manually in your PRs.
image: 'docker://asyncapi/github-action-for-cli:3.1.1'
args:
- ${{ inputs.cli_version }}
- ${{ inputs.command }}
- ${{ inputs.filepath }}
- ${{ inputs.template }}
- ${{ inputs.language }}
- ${{ inputs.output }}
- ${{ inputs.parameters }}
- ${{ inputs.custom_command }}
branding:
icon: 'file-text'
color: purple
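A minimal usage sketch that generates markdown docs from an AsyncAPI document; the version tag, document path, and output directory are illustrative assumptions:
```yaml
# Illustrative; the tag, filepath and output directory are assumptions.
on: push
jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mszostok/github-action-for-cli@v3   # tag illustrative
        with:
          command: generate
          filepath: docs/asyncapi.yaml            # hypothetical path
          template: '@asyncapi/markdown-template@0.10.0'
          output: generated-docs                  # hypothetical output directory
```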
Action ID: marketplace/actions/go-dependency-submission
Author: GitHub
Publisher: actions
Repository: github.com/actions/go-dependency-submission
Calculates dependencies for a Go build-target and submits the list to the Dependency Submission API
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub Personal Access Token (PAT). Defaults to PAT provided by Action runner Default: ${{ github.token }} |
metadata |
Optional | User provided map of max key/value pairs of metadata to include with the snapshot e.g. {"lastModified": "12-31-2022"} |
go-mod-path |
Required | Repo path to the go.mod file used to detect dependencies for the Go build target. Defaults to go.mod in the root of the repository. Default: go.mod |
go-build-target |
Required | Build target to detect build dependencies. If unspecified, will use "all", which will detect all dependencies used in all build targets (including tests and tools). Default: all |
snapshot-sha |
Optional | The SHA that the results will be linked to in the dependency snapshot |
snapshot-ref |
Optional | The ref that the results will be linked to in the dependency snapshot |
detector-name |
Optional | The name of the detector that generated the dependency snapshot |
detector-version |
Optional | The version of the detector that generated the dependency snapshot |
detector-url |
Optional | The URL to the detector that generated the dependency snapshot |
name: 'Go Dependency Submission'
description: 'Calculates dependencies for a Go build-target and submits the list to the Dependency Submission API'
author: 'GitHub'
inputs:
token:
description: "GitHub Personal Access Token (PAT). Defaults to PAT provided by Action runner"
required: false
default: ${{ github.token }}
metadata:
required: false
description: 'User provided map of max key/value pairs of metadata to include with the snapshot e.g. {"lastModified": "12-31-2022"}'
go-mod-path:
required: true
description: 'Repo path to the go.mod file used to detect dependencies for the Go build target. Defaults to go.mod in the root of the repository.'
default: 'go.mod'
go-build-target:
required: true
description: 'Build target to detect build dependencies. If unspecified, will use "all", which will detect all dependencies used in all build targets (including tests and tools).'
default: 'all'
snapshot-sha:
description: The SHA that the results will be linked to in the dependency snapshot
type: string
required: false
default: ''
snapshot-ref:
description: The ref that the results will be linked to in the dependency snapshot
type: string
required: false
default: ''
# If any of detector-name, detector-version, or detector-url are provided, they all have to be provided.
# Defaults will be used if none are provided. If only one or two are provided, the action will fail.
detector-name:
description: The name of the detector that generated the dependency snapshot
type: string
required: false
default: ''
detector-version:
description: The version of the detector that generated the dependency snapshot
type: string
required: false
default: ''
detector-url:
description: The URL to the detector that generated the dependency snapshot
type: string
required: false
default: ''
runs:
using: 'node24'
main: 'dist/index.js'
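A minimal usage sketch that submits the dependency snapshot for the default build target; the version tags and the `contents: write` permission are illustrative assumptions:
```yaml
# Illustrative; tags and the permission block are assumptions.
on:
  push:
    branches: [main]
permissions:
  contents: write   # assumed requirement for the Dependency Submission API
jobs:
  dependency-submission:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - uses: actions/go-dependency-submission@v2   # tag illustrative
        with:
          go-mod-path: go.mod
          go-build-target: all
```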
Action ID: marketplace/amirisback/juara-android-compose
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/juara-android-compose
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/wearerequired/auto-merge-dependency-update-action
Author: required gmbh
Publisher: wearerequired
Repository: github.com/wearerequired/auto-merge-dependency-update-action
Enable auto-merge for a PR that only contains dependency updates, based on a set of rules.
| Name | Required | Description |
|---|---|---|
github-token |
Required | A personal access token used to authenticate with GitHub |
allowed-actors |
Optional | Comma-separated list of usernames auto merge is allowed for Default: dependabot-preview[bot], dependabot[bot] |
allowed-update-types |
Optional | Comma-separated list of types of updates that are allowed. Supported: [devDependencies|dependencies]:[major|minor|patch] Default: devDependencies:minor, devDependencies:patch |
package-block-list |
Optional | Comma-separated list of packages that auto merge should not be allowed for |
merge-method |
Optional | The merge method to use. Supported: MERGE, SQUASH, REBASE Default: SQUASH |
merge-author-email |
Optional | The email address to associate with the auto-merge |
name: 'Auto Merge Dependency Updates'
description: 'Enable auto-merge for a PR that only contains dependency updates, based on a set of rules.'
author: 'required gmbh'
inputs:
github-token:
description: 'A personal access token used to authenticate with GitHub'
required: true
allowed-actors:
description: 'Comma-separated list of usernames auto merge is allowed for'
default: 'dependabot-preview[bot], dependabot[bot]'
required: false
allowed-update-types:
description: 'Comma-separated list of types of updates that are allowed. Supported: [devDependencies|dependencies]:[major|minor|patch]'
default: 'devDependencies:minor, devDependencies:patch'
required: false
package-block-list:
description: 'Comma-separated list of packages that auto merge should not be allowed for'
required: false
merge-method:
description: 'The merge method to use. Supported: MERGE, SQUASH, REBASE'
default: 'SQUASH'
required: false
merge-author-email:
description: 'The email address to associate with the auto-merge'
required: false
runs:
using: 'node16'
main: 'dist/index.js'
branding:
icon: 'git-merge'
color: 'purple'
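A minimal usage sketch enabling auto-merge for Dependabot dev-dependency updates; the version tag and the PAT secret name are illustrative assumptions:
```yaml
# Illustrative; the tag and secret name are assumptions.
on: pull_request
jobs:
  auto-merge:
    runs-on: ubuntu-latest
    steps:
      - uses: wearerequired/auto-merge-dependency-update-action@v1   # tag illustrative
        with:
          github-token: ${{ secrets.AUTO_MERGE_TOKEN }}              # hypothetical PAT secret
          allowed-update-types: 'devDependencies:minor, devDependencies:patch'
          merge-method: SQUASH
```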
Action ID: marketplace/julia-actions/julia-compute-test-matrix
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-compute-test-matrix
Compute a test matrix
| Name | Required | Description |
|---|---|---|
include-release-versions |
Optional | Default: True |
include-lts-versions |
Optional | Default: True |
include-all-compatible-minor-versions |
Optional | |
include-smallest-compatible-minor-versions |
Optional | Default: True |
include-rc-versions |
Optional | |
include-beta-versions |
Optional | |
include-alpha-versions |
Optional | |
include-nightly-versions |
Optional | |
include-windows-x64 |
Optional | Default: True |
include-windows-x86 |
Optional | Default: True |
include-linux-x64 |
Optional | Default: True |
include-linux-x86 |
Optional | Default: True |
include-macos-x64 |
Optional | Default: True |
include-macos-aarch64 |
Optional | Default: True |
| Name | Description |
|---|---|
test-matrix |
name: 'Julia compute test matrix'
description: 'Compute a test matrix'
inputs:
include-release-versions:
type: boolean
required: false
default: true
include-lts-versions:
type: boolean
required: false
default: true
include-all-compatible-minor-versions:
type: boolean
required: false
default: false
include-smallest-compatible-minor-versions:
type: boolean
required: false
default: true
include-rc-versions:
type: boolean
required: false
default: false
include-beta-versions:
type: boolean
required: false
default: false
include-alpha-versions:
type: boolean
required: false
default: false
include-nightly-versions:
type: boolean
required: false
default: false
include-windows-x64:
type: boolean
required: false
default: true
include-windows-x86:
type: boolean
required: false
default: true
include-linux-x64:
type: boolean
required: false
default: true
include-linux-x86:
type: boolean
required: false
default: true
include-macos-x64:
type: boolean
required: false
default: true
include-macos-aarch64:
type: boolean
required: false
default: true
outputs:
test-matrix:
value: ${{ steps.compute-test-matrix.outputs.test-matrix }}
runs:
using: "composite"
steps:
- name: Compute Manifest hash
id: project-hash
shell: pwsh
run: |
$ourHash = Get-FileHash -LiteralPath "$env:GITHUB_ACTION_PATH\Manifest.toml"
"MANIFEST_HASH=$($ourHash.Hash)" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
- name: Check Julia version
shell: bash
id: julia-version
run: |
echo "JULIA_VERSION=$(julia -v)" >> $GITHUB_OUTPUT
- uses: actions/cache@v4
id: cache-project
with:
path: ${{ runner.tool_cache }}/julia-get-compatible-juliaup-channels
key: julia-get-compatible-juliaup-channels-cache-${{ runner.os }}-${{ steps.julia-version.outputs.JULIA_VERSION }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Install and precompile
if: steps.cache-project.outputs.cache-hit != 'true'
run: julia -e 'import Pkg; Pkg.instantiate()'
shell: bash
env:
JULIA_PROJECT: ${{ github.action_path }}
JULIA_DEPOT_PATH: ${{ runner.tool_cache }}/julia-get-compatible-juliaup-channels
- uses: actions/cache/save@v4
if: steps.cache-project.outputs.cache-hit != 'true'
with:
path: ${{ runner.tool_cache }}/julia-get-compatible-juliaup-channels
key: julia-get-compatible-juliaup-channels-cache-${{ runner.os }}-${{ steps.julia-version.outputs.JULIA_VERSION }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Compute compatible Juliaup channels
id: compute-test-matrix
run: julia ${{ github.action_path }}/main.jl
shell: pwsh
env:
JULIA_PROJECT: ${{ github.action_path }}
JULIA_DEPOT_PATH: ${{ runner.tool_cache }}/julia-get-compatible-juliaup-channels
INCLUDE_RELEASE_VERSIONS: ${{ inputs.include-release-versions }}
INCLUDE_LTS_VERSIONS: ${{ inputs.include-lts-versions }}
INCLUDE_ALL_COMPATIBLE_MINOR_VERSIONS: ${{ inputs.include-all-compatible-minor-versions }}
INCLUDE_SMALLEST_COMPATIBLE_MINOR_VERSIONS: ${{ inputs.include-smallest-compatible-minor-versions }}
INCLUDE_RC_VERSIONS: ${{ inputs.include-rc-versions }}
INCLUDE_BETA_VERSIONS: ${{ inputs.include-beta-versions }}
INCLUDE_ALPHA_VERSIONS: ${{ inputs.include-alpha-versions }}
INCLUDE_NIGHTLY_VERSIONS: ${{ inputs.include-nightly-versions }}
INCLUDE_WINDOWS_X64: ${{ inputs.include-windows-x64 }}
INCLUDE_WINDOWS_X86: ${{ inputs.include-windows-x86 }}
INCLUDE_LINUX_X64: ${{ inputs.include-linux-x64 }}
INCLUDE_LINUX_X86: ${{ inputs.include-linux-x86 }}
INCLUDE_MACOS_X64: ${{ inputs.include-macos-x64 }}
INCLUDE_MACOS_AARCH64: ${{ inputs.include-macos-aarch64 }}
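A minimal usage sketch that computes the matrix in one job and consumes it in another; it assumes Julia is made available first (e.g. via julia-actions/setup-julia), and the tags and matrix key names are illustrative assumptions:
```yaml
# Illustrative; tags are assumptions and the matrix entry shape depends on the action's output.
on: push
jobs:
  matrix:
    runs-on: ubuntu-latest
    outputs:
      test-matrix: ${{ steps.compute.outputs.test-matrix }}
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2                       # assumed companion action
      - id: compute
        uses: julia-actions/julia-compute-test-matrix@v1         # tag illustrative
  test:
    needs: matrix
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJson(needs.matrix.outputs.test-matrix) }}
    steps:
      - run: echo "matrix entry ${{ toJson(matrix) }}"
```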
Action ID: marketplace/mheap/cascading-merge-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/cascading-merge-action
Automatically merge branches into each other when a base branch changes
| Name | Required | Description |
|---|---|---|
token |
Required | The GITHUB_TOKEN to use Default: ${{ github.token }} |
branches |
Required | The branches to merge, one per line. The branches at the top will be merged into lower branches |
name: Cascading Merge
description: Automatically merge branches into each other when a base branch changes
inputs:
token:
description: The GITHUB_TOKEN to use
default: ${{ github.token }}
required: true
branches:
description: The branches to merge, one per line. The branches at the top will be merged into lower branches
required: true
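A minimal usage sketch cascading changes from main down through release and develop branches; the version tag, branch names, and the write permission are illustrative assumptions:
```yaml
# Illustrative; the tag, branch names and permission block are assumptions.
on:
  push:
    branches: [main]
permissions:
  contents: write   # assumed requirement for pushing merge commits
jobs:
  cascade:
    runs-on: ubuntu-latest
    steps:
      - uses: mheap/cascading-merge-action@v1   # tag illustrative
        with:
          token: ${{ github.token }}
          branches: |
            main
            release/1.x
            develop
```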
Action ID: marketplace/sobolevn/mypy
Author: Jukka Lehtosalo and contributors
Publisher: sobolevn
Repository: github.com/sobolevn/mypy
Optional Static Typing for Python.
| Name | Required | Description |
|---|---|---|
options |
Optional | Options passed to mypy. Use `mypy --help` to see available options. |
paths |
Optional | Explicit paths to run mypy on. Defaults to the current directory.
Default: . |
version |
Optional | Mypy version to use (PEP440) - e.g. "0.910" |
install_types |
Optional | Whether to automatically install missing library stub packages. ('yes'|'no', default: 'yes')
Default: yes |
install_project_dependencies |
Optional | Whether to attempt to install project dependencies into mypy environment. ('yes'|'no', default: 'yes')
Default: yes |
name: "Mypy"
description: "Optional Static Typing for Python."
author: "Jukka Lehtosalo and contributors"
inputs:
options:
description: >
Options passed to mypy. Use `mypy --help` to see available options.
required: false
paths:
description: >
Explicit paths to run mypy on. Defaults to the current directory.
required: false
default: "."
version:
description: >
Mypy version to use (PEP440) - e.g. "0.910"
required: false
default: ""
install_types:
description: >
Whether to automatically install missing library stub packages.
('yes'|'no', default: 'yes')
default: "yes"
install_project_dependencies:
description: >
Whether to attempt to install project dependencies into mypy
environment. ('yes'|'no', default: 'yes')
default: "yes"
branding:
color: "blue"
icon: "check-circle"
runs:
using: composite
steps:
- name: mypy setup
shell: bash
run: |
echo ::group::Installing mypy...
export PIP_DISABLE_PIP_VERSION_CHECK=1
if [ "$RUNNER_OS" == "Windows" ]; then
HOST_PYTHON=python
else
HOST_PYTHON=python3
fi
venv_script="import os.path; import venv; import sys;
path = os.path.join(r'${{ github.action_path }}', '.mypy-venv');
venv.main([path]);
bin_subdir = 'Scripts' if sys.platform == 'win32' else 'bin';
print(os.path.join(path, bin_subdir, 'python'));
"
VENV_PYTHON=$(echo $venv_script | "$HOST_PYTHON")
mypy_spec="mypy"
if [ -n "${{ inputs.version }}" ]; then
mypy_spec+="==${{ inputs.version }}"
fi
if ! "$VENV_PYTHON" -m pip install "$mypy_spec"; then
echo "::error::Could not install mypy."
exit 1
fi
echo ::endgroup::
if [ "${{ inputs.install_project_dependencies }}" == "yes" ]; then
VENV=$("$VENV_PYTHON" -c 'import sys;print(sys.prefix)')
echo ::group::Installing project dependencies...
"$VENV_PYTHON" -m pip download --dest="$VENV"/deps .
"$VENV_PYTHON" -m pip install -U --find-links="$VENV"/deps "$VENV"/deps/*
echo ::endgroup::
fi
echo ::group::Running mypy...
mypy_opts=""
if [ "${{ inputs.install_types }}" == "yes" ]; then
mypy_opts+="--install-types --non-interactive"
fi
echo "mypy $mypy_opts ${{ inputs.options }} ${{ inputs.paths }}"
"$VENV_PYTHON" -m mypy $mypy_opts ${{ inputs.options }} ${{ inputs.paths }}
echo ::endgroup::
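A minimal usage sketch running mypy in strict mode on a source directory; the reference, options, and path are illustrative assumptions:
```yaml
# Illustrative; the reference, options and path are assumptions.
on: push
jobs:
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: sobolevn/mypy@master     # reference illustrative
        with:
          paths: src/                  # hypothetical package path
          options: --strict
```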
Action ID: marketplace/azure/sql-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/sql-action
Deploy a database project, DACPAC, or a SQL script to Azure SQL database
| Name | Required | Description |
|---|---|---|
connection-string |
Required | The connection string, including authentication information, for the Azure SQL Server database. |
path |
Required | Path to the file used for this action. Supported file types are .sql, .dacpac, or .sqlproj. |
action |
Optional | If not using a .sql file, the sqlpackage action to execute. |
arguments |
Optional | In case of .dacpac or .sqlproj file types, additional sqlpackage arguments that will be applied. In case of .sql file type, additional go-sqlcmd argument that will be applied. |
sqlpackage-path |
Optional | Specify a SqlPackage executable location to override the default locations. |
build-arguments |
Optional | In case of a .sqlproj file, additional arguments that will be applied to dotnet build when building the database project. |
skip-firewall-check |
Optional | Skip the firewall check when connecting to the Azure SQL Server. |
name: 'Azure SQL Deploy'
description: 'Deploy a database project, DACPAC, or a SQL script to Azure SQL database'
inputs:
connection-string:
description: 'The connection string, including authentication information, for the Azure SQL Server database.'
required: true
path:
description: 'Path to the file used for this action. Supported file types are .sql, .dacpac, or .sqlproj.'
required: true
action:
description: 'If not using a .sql file, the sqlpackage action to execute.'
required: false
arguments:
description: 'In case of .dacpac or .sqlproj file types, additional sqlpackage arguments that will be applied. In case of .sql file type, additional go-sqlcmd argument that will be applied.'
required: false
sqlpackage-path:
description: 'Specify a SqlPackage executable location to override the default locations.'
required: false
build-arguments:
description: 'In case of a .sqlproj file, additional arguments that will be applied to dotnet build when building the database project.'
required: false
skip-firewall-check:
description: 'Skip the firewall check when connecting to the Azure SQL Server.'
required: false
default: false
runs:
using: 'node20'
main: 'lib/main.js'
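A minimal usage sketch deploying a database project; the version tag, project path, secret name, and sqlpackage action value are illustrative assumptions:
```yaml
# Illustrative; the tag, path, secret name and action value are assumptions.
on: push
jobs:
  deploy-db:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/sql-action@v2                                       # tag illustrative
        with:
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }} # hypothetical secret
          path: ./database/project.sqlproj                              # hypothetical path
          action: publish                                               # hypothetical sqlpackage action
```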
Action ID: marketplace/MarketingPipeline/OCR-PDF-Action
Author: github.com/MarketingPipeline
Publisher: MarketingPipeline
Repository: github.com/MarketingPipeline/OCR-PDF-Action
Turn scanned PDFs into searchable documents
| Name | Required | Description |
|---|---|---|
input_file |
Required | Input PDF filepath |
output_file |
Required | Output PDF filepath |
name: 'OCR PDF Action'
description: 'Turn scanned PDFs into searchable documents'
author: 'github.com/MarketingPipeline'
inputs:
input_file:
description: 'Input PDF filepath'
default: ''
required: true
output_file:
description: 'Output PDF filepath'
default: ''
required: true
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.input_file }}
- ${{ inputs.output_file }}
branding:
icon: 'activity'
color: 'white'
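A minimal usage sketch producing a searchable PDF from a scanned one; the version tag and file paths are illustrative assumptions:
```yaml
# Illustrative; the tag and file paths are assumptions.
on: workflow_dispatch
jobs:
  ocr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: MarketingPipeline/OCR-PDF-Action@v1   # tag illustrative
        with:
          input_file: scans/input.pdf               # hypothetical path
          output_file: scans/searchable.pdf         # hypothetical path
```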
Action ID: marketplace/julia-actions/julia-codeformat
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-codeformat
A Julia code formatter based on DocumentFormat.jl
name: 'Format Julia code'
description: 'A Julia code formatter based on DocumentFormat.jl'
author: 'David Anthoff'
branding:
icon: 'eye'
color: 'gray-dark'
runs:
using: 'composite'
steps:
- run: julia --color=yes -e 'using Pkg; Pkg.activate("juliaformatter", shared=true); Pkg.add(["DocumentFormat", "FilePaths"]); using DocumentFormat, FilePaths; DocumentFormat.format(p".")'
shell: bash
Action ID: marketplace/pascalgn/size-label-action
Author: Unknown
Publisher: pascalgn
Repository: github.com/pascalgn/size-label-action
Assign labels based on pull request change sizes
| Name | Required | Description |
|---|---|---|
sizes |
Optional | Custom size configuration |
name: "Assign size label"
description: "Assign labels based on pull request change sizes"
inputs:
sizes:
description: "Custom size configuration"
required: false
runs:
using: "node20"
main: "dist/index.js"
branding:
icon: "tag"
color: "blue"
Action ID: marketplace/atlassian/gajira-comment
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-comment
Add a comment to an issue
| Name | Required | Description |
|---|---|---|
issue |
Required | Key of the issue to be commented on |
comment |
Required | Comment |
name: Jira Add Comment
description: Add a comment to an issue
branding:
icon: 'align-left'
color: 'blue'
inputs:
issue:
description: Key of the issue to be commented on
required: true
comment:
description: Comment
required: true
runs:
using: 'node16'
main: './dist/index.js'
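A minimal usage sketch; Jira authentication is assumed to be established by a prior login step (e.g. atlassian/gajira-login), and the tags, secret names, and issue key are illustrative assumptions:
```yaml
# Illustrative; tags, secret names and the issue key are assumptions.
on: push
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: atlassian/gajira-login@v3              # assumed companion action
        env:
          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
      - uses: atlassian/gajira-comment@v3            # tag illustrative
        with:
          issue: PROJ-123                            # hypothetical issue key
          comment: 'Build ${{ github.run_id }} finished.'
```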
Action ID: marketplace/google-github-actions/create-cloud-deploy-release
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/create-cloud-deploy-release
Use this action to create a release in Cloud Deploy.
| Name | Required | Description |
|---|---|---|
name |
Required | The name of the release. |
delivery_pipeline |
Required | The delivery pipeline to use for the release. |
region |
Optional | Region in which the delivery pipeline can be found. |
source |
Optional | The location of the files to be included in the release. |
build_artifacts |
Optional | Path to a Skaffold output file containing the details of the image(s) to be released. |
images |
Optional | The images to be released. |
disable_initial_rollout |
Optional | Prevent the release being deployed to the first target in the delivery pipeline. |
gcs_source_staging_dir |
Optional | A directory in Google Cloud Storage to copy the source used for staging the build. |
skaffold_file |
Optional | Path of the skaffold file absolute or relative to the source directory. |
annotations |
Optional | Add additional annotations to the release. |
labels |
Optional | Add additional labels to the release. |
description |
Optional | Include a description of the release. |
deploy_parameters |
Optional | Additional parameters to supply at release creation time. |
project_id |
Optional | The Google Cloud Project ID. If unset, this is inherited from the environment. |
flags |
Optional | Space separated list of other Cloud Deploy flags, examples can be found: https://cloud.google.com/sdk/gcloud/reference/deploy/releases/create#FLAGS Example: '--from-k8s-manifest=manifest.yaml --skaffold-version=skaffold_preview' |
gcloud_version |
Optional | Version of the Cloud SDK to install. If unspecified or set to "latest", the latest available gcloud SDK version for the target platform will be installed. Example: "290.0.1". |
gcloud_component |
Optional | Version of the Cloud SDK components to install and use. If unspecified, the latest or released version will be used. This is the equivalent of running 'gcloud alpha run' or 'gcloud beta run'. Valid values are `alpha` or `beta`. |
| Name | Description |
|---|---|
name |
The full name of the release in Cloud Deploy, including project and pipeline names, as well as the chosen name of the release itself. |
link |
A link to the Cloud Deploy release in the Google Cloud Web Console. |
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Create Cloud Deploy Release'
author: 'Google LLC'
description: |-
Use this action to create a release in Cloud Deploy.
inputs:
name:
description: |-
The name of the release.
required: true
delivery_pipeline:
description: |-
The delivery pipeline to use for the release.
required: true
region:
description: |-
Region in which the delivery pipeline can be found.
required: false
source:
description: |-
The location of the files to be included in the release.
required: false
build_artifacts:
description: |-
Path to a Skaffold output file containing the details of the image(s) to be released.
required: false
images:
description: |-
The images to be released.
required: false
disable_initial_rollout:
description: |-
Prevent the release being deployed to the first target in the delivery pipeline.
default: false
required: false
gcs_source_staging_dir:
description: |-
A directory in Google Cloud Storage to copy the source used for staging the build.
required: false
skaffold_file:
description: |-
Path of the skaffold file absolute or relative to the source directory.
required: false
annotations:
description: |-
Add additional annotations to the release.
required: false
labels:
description: |-
Add additional labels to the release.
required: false
description:
description: |-
Include a description of the release.
required: false
deploy_parameters:
description: |-
Additional parameters to supply at release creation time.
required: false
project_id:
description: |-
The Google Cloud Project ID. If unset, this is inherited from the environment.
required: false
flags:
description: |-
Space separated list of other Cloud Deploy flags, examples can be found:
https://cloud.google.com/sdk/gcloud/reference/deploy/releases/create#FLAGS
Example: '--from-k8s-manifest=manifest.yaml --skaffold-version=skaffold_preview'
required: false
gcloud_version:
description: |-
Version of the Cloud SDK to install. If unspecified or set to "latest",
the latest available gcloud SDK version for the target platform will be
installed. Example: "290.0.1".
required: false
gcloud_component:
description: |-
Version of the Cloud SDK components to install and use. If unspecified, the latest
or released version will be used. This is the equivalent of running
'gcloud alpha run' or 'gcloud beta run'. Valid values are `alpha` or `beta`.
required: false
outputs:
name:
description: |-
The full name of the release in Cloud Deploy, including project and
pipeline names, as well as the chosen name of the release itself.
link:
description: |-
A link to the Cloud Deploy release in the Google Cloud Web Console.
branding:
icon: 'chevrons-right'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
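A minimal usage sketch creating a release from a prebuilt image; authentication is assumed to be handled by google-github-actions/auth, and the tags, pipeline, region, project, and image values are illustrative assumptions:
```yaml
# Illustrative; tags, secret name, pipeline, region and image are assumptions.
on: push
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2                            # assumed companion action
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}                  # hypothetical secret
      - uses: google-github-actions/create-cloud-deploy-release@v1     # tag illustrative
        with:
          name: release-${{ github.sha }}
          delivery_pipeline: my-pipeline                               # hypothetical pipeline
          region: us-central1                                          # hypothetical region
          images: app=gcr.io/my-project/app:${{ github.sha }}          # hypothetical image mapping
```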
Action ID: marketplace/dflook/tofu-output
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-output
Retrieve the root-level outputs from an OpenTofu configuration.
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the OpenTofu root module directory. Default: . |
workspace |
Optional | OpenTofu workspace to get outputs from Default: default |
variables |
Optional | Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
json_output_path |
This is the path to all the root module outputs in a JSON file. The path is relative to the Actions workspace. For example, with the OpenTofu config: ```hcl output "service_hostname" { value = "example.com" } ``` The file pointed to by this output will contain: ```json { "service_hostname": "example.com" } ``` OpenTofu list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object. |
name: tofu-output
description: Retrieve the root-level outputs from an OpenTofu configuration.
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu root module directory.
required: false
default: "."
workspace:
description: OpenTofu workspace to get outputs from
required: false
default: "default"
variables:
description: |
Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
outputs:
json_output_path:
description: |
This is the path to all the root module outputs in a JSON file.
The path is relative to the Actions workspace.
For example, with the OpenTofu config:
```hcl
output "service_hostname" {
value = "example.com"
}
```
The file pointed to by this output will contain:
```json
{
"service_hostname": "example.com"
}
```
OpenTofu list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/output.sh
branding:
icon: globe
color: purple
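A minimal usage sketch reading an output value from the generated JSON file; the version tag and module path are illustrative assumptions, and `service_hostname` is the output name used in the example above:
```yaml
# Illustrative; the tag and module path are assumptions.
on: push
jobs:
  outputs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: tofu
        uses: dflook/tofu-output@v2      # tag illustrative
        with:
          path: infrastructure           # hypothetical module path
      - run: jq -r '.service_hostname' "${{ steps.tofu.outputs.json_output_path }}"
```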
Action ID: marketplace/github/lock
Author: Grant Birkinbine
Publisher: github
Repository: github.com/github/lock
Lock Action to support deployment locking for the branch-deploy Action
| Name | Required | Description |
|---|---|---|
github_token |
Required | The GitHub token used to create an authenticated client - Provided for you by default! Default: ${{ github.token }} |
environment |
Required | The explicit environment to apply locking actions to when running in headless mode - OR the default environment to use when running in the context of IssueOps commands in a comment - Examples: production, development, etc - Use "global" for the global lock environment Default: production |
environment_targets |
Optional | Optional (or additional) target environments to select for use with lock/unlock. Example, "production,development,staging". Example usage: `.lock development`, `.lock production`, `.unlock staging` Default: production,development,staging |
lock_trigger |
Optional | The string to look for in comments as an IssueOps lock trigger. Used for locking branch deployments on a specific branch. Example: "lock" Default: .lock |
unlock_trigger |
Optional | The string to look for in comments as an IssueOps unlock trigger. Used for unlocking branch deployments. Example: "unlock" Default: .unlock |
reaction |
Optional | If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket" or "eyes" Default: eyes |
lock_info_alias |
Optional | An alias or shortcut to get details about the current lock (if it exists) Example: ".info" Default: .wcid |
global_lock_flag |
Optional | The flag to pass into the lock command to lock all environments. Example: "--global" Default: --global |
prefix_only |
Optional | If "false", the trigger can match anywhere in the comment Default: true |
mode |
Optional | The mode to use "lock", "unlock", or "check". If not provided, the default mode assumes the workflow is not headless and triggered by a comment on a pull request - Example: .lock / .unlock |
reason |
Optional | Reason for claiming the deployment lock for this repository |
| Name | Description |
|---|---|
triggered |
The string "true" if the trigger was found, otherwise the string "false" |
comment_id |
The comment id which triggered this deployment (if it was not headless) |
type |
The type of trigger which was found - 'lock', 'unlock', or 'info-info-alias' |
comment_body |
The comment body which triggered this action (if it was not headless) |
headless |
The string "true" if the run was headless, otherwise the string "false" - Headless in this context would be if the "mode" was set and the Action was not invoked by a comment on a pull request |
locked |
If the mode is set to "check", this output will be "true" if a lock exists, otherwise "false" |
lock_environment |
When running in headless mode and the "mode" is set to "check", this output will be the environment name that holds the lock, otherwise it will be empty |
branch |
If the mode is set to "check", this output will be the branch name that holds the lock, otherwise it will be empty |
created_by |
If the mode is set to "check", this output will be the user that holds the lock, otherwise it will be empty |
created_at |
If the mode is set to "check", this output will be the ISO 8601 format date that the lock was claimed, otherwise it will be empty |
reason |
If the mode is set to "check", this output will be the reason the deployment lock was claimed, otherwise it will be empty |
link |
If the mode is set to "check", this output will be the link to the GitHub issue comment or action run that claimed the lock, otherwise it will be empty |
global_lock_claimed |
The string "true" if the global lock was claimed |
global_lock_released |
The string "true" if the global lock was released |
name: "deploy-lock"
description: "Lock Action to support deployment locking for the branch-deploy Action"
author: "Grant Birkinbine"
branding:
icon: 'lock'
color: 'gray-dark'
inputs:
github_token:
description: The GitHub token used to create an authenticated client - Provided for you by default!
default: ${{ github.token }}
required: true
environment:
description: 'The explicit environment to apply locking actions to when running in headless mode - OR the default environment to use when running in the context of IssueOps commands in a comment - Examples: production, development, etc - Use "global" for the global lock environment'
required: true
default: "production"
environment_targets:
description: 'Optional (or additional) target environments to select for use with lock/unlock. Example, "production,development,staging". Example usage: `.lock development`, `.lock production`, `.unlock staging`'
required: false
default: "production,development,staging"
lock_trigger:
description: 'The string to look for in comments as an IssueOps lock trigger. Used for locking branch deployments on a specific branch. Example: "lock"'
required: false
default: ".lock"
unlock_trigger:
description: 'The string to look for in comments as an IssueOps unlock trigger. Used for unlocking branch deployments. Example: "unlock"'
required: false
default: ".unlock"
reaction:
description: 'If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket" or "eyes"'
required: false
default: "eyes"
lock_info_alias:
description: 'An alias or shortcut to get details about the current lock (if it exists) Example: ".info"'
required: false
default: ".wcid"
global_lock_flag:
description: 'The flag to pass into the lock command to lock all environments. Example: "--global"'
required: false
default: "--global"
prefix_only:
description: 'If "false", the trigger can match anywhere in the comment'
required: false
default: "true"
mode:
description: 'The mode to use "lock", "unlock", or "check". If not provided, the default mode assumes the workflow is not headless and triggered by a comment on a pull request - Example: .lock / .unlock'
required: false
reason:
description: Reason for claiming the deployment lock for this repository
required: false
outputs:
triggered:
description: 'The string "true" if the trigger was found, otherwise the string "false"'
comment_id:
description: The comment id which triggered this deployment (if it was not headless)
type:
description: The type of trigger which was found - 'lock', 'unlock', or 'info-info-alias'
comment_body:
description: The comment body which triggered this action (if it was not headless)
headless:
description: 'The string "true" if the run was headless, otherwise the string "false" - Headless in this context would be if the "mode" was set and the Action was not invoked by a comment on a pull request'
locked:
description: 'If the mode is set to "check", this output will be "true" if a lock exists, otherwise "false"'
lock_environment:
description: When running in headless mode and the "mode" is set to "check", this output will be the environment name that holds the lock, otherwise it will be empty
branch:
description: 'If the mode is set to "check", this output will be the branch name that holds the lock, otherwise it will be empty'
created_by:
description: 'If the mode is set to "check", this output will be the user that holds the lock, otherwise it will be empty'
created_at:
description: 'If the mode is set to "check", this output will be the ISO 8601 format date that the lock was claimed, otherwise it will be empty'
reason:
description: 'If the mode is set to "check", this output will be the reason the deployment lock was claimed, otherwise it will be empty'
link:
description: 'If the mode is set to "check", this output will be the link to the GitHub issue comment or action run that claimed the lock, otherwise it will be empty'
global_lock_claimed:
description: 'The string "true" if the global lock was claimed'
global_lock_released:
description: 'The string "true" if the global lock was released'
runs:
using: "node24"
main: "dist/index.js"
Action ID: marketplace/goto-bus-stop/standard-action
Author: goto-bus-stop
Publisher: goto-bus-stop
Repository: github.com/goto-bus-stop/standard-action
Check that your code follows Standard Style
| Name | Required | Description |
|---|---|---|
files |
Optional | Globs to lint. |
version |
Required | Output format style, see https://eslint.org/docs/user-guide/formatters/. Default `stylish`. Default: stylish |
linter |
Required | Standard-compatible linter variant to use. ex. semistandard, happiness, doublestandard... Default: standard |
annotate |
Optional | Annotate the diff UI with lint errors. Requires passing in env.GITHUB_TOKEN. |
name: 'StandardJS lint checks'
description: 'Check that your code follows Standard Style'
branding:
icon: 'code'
color: 'yellow'
author: 'goto-bus-stop'
inputs:
files:
description: 'Globs to lint.'
# let standard pick the default list
# default: []
version:
description: 'Output format style, see https://eslint.org/docs/user-guide/formatters/. Default `stylish`.'
required: true
default: 'stylish'
linter:
description: 'Standard-compatible linter variant to use. ex. semistandard, happiness, doublestandard...'
required: true
default: 'standard'
annotate:
description: 'Annotate the diff UI with lint errors. Requires passing in env.GITHUB_TOKEN.'
default: false
runs:
using: 'node12'
main: 'index.js'
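A minimal usage sketch based only on the inputs listed above; the `@v1` ref and the job layout are illustrative assumptions, not taken from this entry:

```yaml
name: lint
on: [push, pull_request]
jobs:
  standard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Ref is an assumption; pin to a published release of the action.
      - uses: goto-bus-stop/standard-action@v1
        with:
          linter: standard      # or a variant such as semistandard
          annotate: true        # diff annotations need GITHUB_TOKEN
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```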
Action ID: marketplace/actions/cache
Author: GitHub
Publisher: actions
Repository: github.com/actions/cache
Cache artifacts like dependencies and build outputs to improve workflow execution time
| Name | Required | Description |
|---|---|---|
| path | Required | A list of files, directories, and wildcard patterns to cache and restore |
| key | Required | An explicit key for restoring and saving the cache |
| restore-keys | Optional | An ordered multiline string listing the prefix-matched keys, that are used for restoring stale cache if no cache hit occurred for key. Note `cache-hit` returns false in this case. |
| upload-chunk-size | Optional | The chunk size used to split up large files during upload, in bytes |
| enableCrossOsArchive | Optional | An optional boolean when enabled, allows windows runners to save or restore caches that can be restored or saved respectively on other platforms Default: false |
| fail-on-cache-miss | Optional | Fail the workflow if cache entry is not found Default: false |
| lookup-only | Optional | Check if a cache entry exists for the given input(s) (key, restore-keys) without downloading the cache Default: false |
| save-always | Optional | Run the post step to save the cache even if another step before fails Default: false |
| Name | Description |
|---|---|
| cache-hit | A boolean value to indicate an exact match was found for the primary key |
name: 'Cache'
description: 'Cache artifacts like dependencies and build outputs to improve workflow execution time'
author: 'GitHub'
inputs:
path:
description: 'A list of files, directories, and wildcard patterns to cache and restore'
required: true
key:
description: 'An explicit key for restoring and saving the cache'
required: true
restore-keys:
description: 'An ordered multiline string listing the prefix-matched keys, that are used for restoring stale cache if no cache hit occurred for key. Note `cache-hit` returns false in this case.'
required: false
upload-chunk-size:
description: 'The chunk size used to split up large files during upload, in bytes'
required: false
enableCrossOsArchive:
description: 'An optional boolean when enabled, allows windows runners to save or restore caches that can be restored or saved respectively on other platforms'
default: 'false'
required: false
fail-on-cache-miss:
description: 'Fail the workflow if cache entry is not found'
default: 'false'
required: false
lookup-only:
description: 'Check if a cache entry exists for the given input(s) (key, restore-keys) without downloading the cache'
default: 'false'
required: false
save-always:
description: 'Run the post step to save the cache even if another step before fails'
default: 'false'
required: false
deprecationMessage: |
save-always does not work as intended and will be removed in a future release.
A separate `actions/cache/restore` step should be used instead.
See https://github.com/actions/cache/tree/main/save#always-save-cache for more details.
outputs:
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'
runs:
using: 'node24'
main: 'dist/restore/index.js'
post: 'dist/save/index.js'
post-if: "success()"
branding:
icon: 'archive'
color: 'gray-dark'
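For context, a typical restore/save cycle wired to the `path`, `key`, and `restore-keys` inputs and the `cache-hit` output documented above; the cached path and key scheme are illustrative:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Cache npm downloads
        id: npm-cache
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            npm-${{ runner.os }}-
      - name: Report cache result
        run: echo "Exact cache hit: ${{ steps.npm-cache.outputs.cache-hit }}"
      - run: npm ci
```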
Action ID: marketplace/github/codeql-action
Author: GitHub
Publisher: github
Repository: github.com/github/codeql-action
Stub: Don't use this action directly. Read [the documentation](https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql) instead.
name: 'CodeQL: Stub'
description: "Stub: Don't use this action directly. Read [the documentation](https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql) instead."
author: 'GitHub'
runs:
using: 'composite'
steps:
- name: 'Stub'
run: |
echo 'This is a stub. Read [the documentation](https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql) instead.'
exit 1
shell: bash
Action ID: marketplace/appleboy/ChatGPT-CodeReview
Author: Unknown
Publisher: appleboy
Repository: github.com/appleboy/ChatGPT-CodeReview
A Code Review Action Powered By ChatGPT
name: ChatGPT CodeReviewer
description: 'A Code Review Action Powered By ChatGPT'
branding:
icon: 'gift'
color: orange
runs:
using: 'node16'
main: 'action/index.cjs'
Action ID: marketplace/julia-actions/setup-julia
Author: Sascha Mann
Publisher: julia-actions
Repository: github.com/julia-actions/setup-julia
Setup a Julia environment and add it to the PATH
| Name | Required | Description |
|---|---|---|
| version | Optional | The Julia version to download (if necessary) and use. Use a string input to avoid unwanted decimal conversion e.g. 1.10 without quotes will be interpreted as 1.1. Examples: "1", "1.10", "lts", "pre" Default: 1 |
| include-all-prereleases | Optional | Include prereleases when matching the Julia version to available versions. Default: false |
| arch | Optional | Architecture of the Julia binaries. Defaults to the architecture of the runner executing the job. Default: default |
| show-versioninfo | Optional | Display InteractiveUtils.versioninfo() after installing Default: false |
| project | Optional | The path to the project directory or file to use when resolving some versions (e.g. min) |
| Name | Description |
|---|---|
| julia-version | The installed Julia version. May vary from the version input if a version range was given as input. |
| julia-bindir | Path to the directory containing the Julia executable. Equivalent to JULIA_BINDIR: https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_BINDIR |
name: 'Setup Julia environment'
description: 'Setup a Julia environment and add it to the PATH'
author: 'Sascha Mann'
inputs:
version:
description: 'The Julia version to download (if necessary) and use. Use a string input to avoid unwanted decimal conversion e.g. 1.10 without quotes will be interpreted as 1.1. Examples: "1", "1.10", "lts", "pre"'
default: '1'
include-all-prereleases:
description: 'Include prereleases when matching the Julia version to available versions.'
required: false
default: 'false'
arch:
description: 'Architecture of the Julia binaries. Defaults to the architecture of the runner executing the job.'
required: false
default: 'default'
show-versioninfo:
description: 'Display InteractiveUtils.versioninfo() after installing'
required: false
default: 'false'
project:
description: 'The path to the project directory or file to use when resolving some versions (e.g. min)'
required: false
default: '' # Special value which fallsback to using JULIA_PROJECT if defined, otherwise "."
outputs:
julia-version:
description: 'The installed Julia version. May vary from the version input if a version range was given as input.'
julia-bindir:
description: 'Path to the directory containing the Julia executable. Equivalent to JULIA_BINDIR: https://docs.julialang.org/en/v1/manual/environment-variables/#JULIA_BINDIR'
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'download'
color: 'green'
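A usage sketch that follows the quoting advice in the `version` description; the `@v2` ref is an assumption:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Quote the version so YAML does not collapse 1.10 into 1.1.
      - uses: julia-actions/setup-julia@v2   # ref is an assumption
        with:
          version: '1.10'
          show-versioninfo: 'true'
      - run: julia -e 'println(VERSION)'
```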
Action ID: marketplace/pascalgn/npm-publish-action
Author: pascalgn
Publisher: pascalgn
Repository: github.com/pascalgn/npm-publish-action
Automatically publish new versions to npm
| Name | Required | Description |
|---|---|---|
| commit_pattern | Optional | The pattern that matches version update commits. |
| tag_name | Optional | The name of the tag that you want to create for the version. |
| tag_message | Optional | The message of the tag that you want to create for the version. |
| create_tag | Optional | Whether to create a git tag or not. |
| workspace | Optional | Custom workspace directory that contains the package.json file. |
| publish_command | Optional | Custom publish command. |
| publish_args | Optional | Publish command arguments. |
| Name | Description |
|---|---|
| changed | Whether the version has changed in the examined commits |
| version | The detected version number |
| commit | The SHA of the commit where the version change has been detected |
name: Publish to npm
author: pascalgn
description: Automatically publish new versions to npm
inputs:
commit_pattern:
description: The pattern that matches version update commits.
required: false
tag_name:
description: The name of the tag that you want to create for the version.
required: false
tag_message:
description: The message of the tag that you want to create for the version.
required: false
create_tag:
description: Whether to create a git tag or not.
required: false
workspace:
description: Custom workspace directory that contains the package.json file.
required: false
publish_command:
description: Custom publish command.
required: false
publish_args:
description: Publish command arguments.
required: false
outputs:
changed:
description: Whether the version has changed in the examined commits
version:
description: The detected version number
commit:
description: The SHA of the commit where the version change has been detected
runs:
using: docker
image: Dockerfile
branding:
icon: package
color: blue
Action ID: marketplace/actions/configure-pages
Author: GitHub
Publisher: actions
Repository: github.com/actions/configure-pages
A GitHub Action to enable Pages, extract various metadata about a site, and configure some supported static site generators.
| Name | Required | Description |
|---|---|---|
| static_site_generator | Optional | Optional static site generator to attempt to configure: "nuxt", "next", "gatsby", or "sveltekit" |
| generator_config_file | Optional | Optional file path to static site generator configuration file |
| token | Required | GitHub token Default: ${{ github.token }} |
| enablement | Optional | Try to enable Pages for the repository if it is not already enabled. This option requires a token other than `GITHUB_TOKEN` to be provided. In the context of a Personal Access Token, the `repo` scope or Pages write permission is required. In the context of a GitHub App, the `administration:write` and `pages:write` permissions are required. Default: false |
| Name | Description |
|---|---|
| base_url | GitHub Pages site full base URL. Examples: "https://octocat.github.io/my-repo", "https://octocat.github.io", "https://www.example.com" |
| origin | GitHub Pages site origin. Examples: "https://octocat.github.io", "https://www.example.com" |
| host | GitHub Pages site host. Examples: "octocat.github.io", "www.example.com" |
| base_path | GitHub Pages site full base path. Examples: "/my-repo" or "" |
name: 'Configure GitHub Pages'
description: 'A GitHub Action to enable Pages, extract various metadata about a site, and configure some supported static site generators.'
author: 'GitHub'
runs:
using: 'node20'
main: 'dist/index.js'
inputs:
static_site_generator:
description: 'Optional static site generator to attempt to configure: "nuxt", "next", "gatsby", or "sveltekit"'
required: false
generator_config_file:
description: 'Optional file path to static site generator configuration file'
required: false
token:
description: 'GitHub token'
default: ${{ github.token }}
required: true
enablement:
description: 'Try to enable Pages for the repository if it is not already enabled. This option requires a token other than `GITHUB_TOKEN` to be provided. In the context of a Personal Access Token, the `repo` scope or Pages write permission is required. In the context of a GitHub App, the `administration:write` and `pages:write` permissions are required.'
default: 'false'
required: false
outputs:
base_url:
description: 'GitHub Pages site full base URL. Examples: "https://octocat.github.io/my-repo", "https://octocat.github.io", "https://www.example.com"'
origin:
description: 'GitHub Pages site origin. Examples: "https://octocat.github.io", "https://www.example.com"'
host:
description: 'GitHub Pages site host. Examples: "octocat.github.io", "www.example.com"'
base_path:
description: 'GitHub Pages site full base path. Examples: "/my-repo" or ""'
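A sketch of where this action typically sits in a Pages workflow, reading its `base_path` output in a later step; the `@v5` ref is an assumption and the echo step stands in for a real site build:

```yaml
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure Pages
        id: pages
        uses: actions/configure-pages@v5   # ref is an assumption
      # Feed the computed base path into the static site build (illustrative).
      - run: echo "Building with base path '${{ steps.pages.outputs.base_path }}'"
```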
Action ID: marketplace/lukka/get-action-usage
Author: Luca Cappa https://github.com/lukka
Publisher: lukka
Repository: github.com/lukka/get-action-usage
Searches public GitHub repositories for usage of the get-cmake, run-cmake and run-vcpkg GitHub Actions and creates a D3.js-based graph.
# Copyright (c) 2022-2023-2024-2025 Luca Cappa
# Released under the term specified in file LICENSE
# SPDX short identifier: CC-BY-SA-4.0
name: 'get-action-usage'
description: Search in public GitHub repositories for usage of get-cmake, run-cmake and run-vcpkg GitHub Actions and creates a D3.js based graph.
author: 'Luca Cappa https://github.com/lukka'
# inputs:
# no inputs, no outputs!
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'terminal'
color: 'green'
Action ID: marketplace/google-github-actions/auth
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/auth
Authenticate to Google Cloud from GitHub Actions via Workload Identity Federation or service account keys.
| Name | Required | Description |
|---|---|---|
| project_id | Optional | ID of the default project to use for future API calls and invocations. If unspecified, this action will attempt to extract the value from other inputs such as "service_account" or "credentials_json". |
| workload_identity_provider | Optional | The full identifier of the Workload Identity Provider, including the project number, pool name, and provider name. If provided, this must be the full identifier which includes all parts, for example: "projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider". This is mutually exclusive with "credentials_json". |
| service_account | Optional | Email address or unique identifier of the Google Cloud service account for which to generate credentials. This is required if "workload_identity_provider" is specified. |
| audience | Optional | The value for the audience (aud) parameter in GitHub's generated OIDC token. This value defaults to the value of "workload_identity_provider", which is also the default value Google Cloud expects for the audience parameter on the token. |
| credentials_json | Optional | The Google Cloud JSON service account key to use for authentication. This is mutually exclusive with "workload_identity_provider". |
| create_credentials_file | Optional | If true, the action will securely generate a credentials file which can be used for authentication via gcloud and Google Cloud SDKs. Default: true |
| export_environment_variables | Optional | If true, the action will export common environment variables which are known to be consumed by popular downstream libraries and tools, including: CLOUDSDK_PROJECT, CLOUDSDK_CORE_PROJECT, GCP_PROJECT, GCLOUD_PROJECT, GOOGLE_CLOUD_PROJECT. If "create_credentials_file" is true, additional environment variables are exported: CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE, GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_GHA_CREDS_PATH. If false, the action will not export any environment variables, meaning future steps are unlikely to be automatically authenticated to Google Cloud. Default: true |
| token_format | Optional | Output format for the generated authentication token. For OAuth 2.0 access tokens, specify "access_token". For OIDC tokens, specify "id_token". To skip token generation, leave this value empty. |
| delegates | Optional | List of additional service account emails or unique identities to use for impersonation in the chain. |
| universe | Optional | The Google Cloud universe to use for constructing API endpoints. The default universe is "googleapis.com", which corresponds to https://cloud.google.com. Trusted Partner Cloud and Google Distributed Hosted Cloud should set this to their universe address. Default: googleapis.com |
| request_reason | Optional | An optional Reason Request System Parameter for each API call made by the GitHub Action. This will inject the "X-Goog-Request-Reason" HTTP header, which will provide user-supplied information in Google Cloud audit logs. |
| cleanup_credentials | Optional | If true, the action will remove any created credentials from the filesystem upon completion. This only applies if "create_credentials_file" is true. Default: true |
| access_token_lifetime | Optional | Desired lifetime duration of the access token, in seconds. This must be specified as the number of seconds with a trailing "s" (e.g. 30s). This is only valid when "token_format" is "access_token". Default: 3600s |
| access_token_scopes | Optional | List of OAuth 2.0 access scopes to be included in the generated token. This is only valid when "token_format" is "access_token". Default: https://www.googleapis.com/auth/cloud-platform |
| access_token_subject | Optional | Email address of a user to impersonate for Domain-Wide Delegation. Access tokens created for Domain-Wide Delegation cannot have a lifetime beyond 1 hour. This is only valid when "token_format" is "access_token". |
| id_token_audience | Optional | The audience (aud) for the generated Google Cloud ID Token. This is only valid when "token_format" is "id_token". |
| id_token_include_email | Optional | Optional parameter of whether to include the service account email in the generated token. If true, the token will contain "email" and "email_verified" claims. This is only valid when "token_format" is "id_token". |
| Name | Description |
|---|---|
| project_id | Provided or extracted value for the Google Cloud project ID. |
| credentials_file_path | Path on the local filesystem where the generated credentials file resides. This is only available if "create_credentials_file" was set to true. |
| auth_token | The intermediate authentication token, which could be used to call other Google Cloud APIs, depending on how you configured IAM. |
| access_token | The Google Cloud access token for calling other Google Cloud APIs. This is only available when "token_format" is "access_token". |
| id_token | The Google Cloud ID token. This is only available when "token_format" is "id_token". |
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Authenticate to Google Cloud'
author: 'Google LLC'
description: |-
Authenticate to Google Cloud from GitHub Actions via Workload Identity
Federation or service account keys.
inputs:
project_id:
description: |-
ID of the default project to use for future API calls and invocations. If
unspecified, this action will attempt to extract the value from other
inputs such as "service_account" or "credentials_json".
required: false
workload_identity_provider:
description: |-
The full identifier of the Workload Identity Provider, including the
project number, pool name, and provider name. If provided, this must be
the full identifier which includes all parts, for example:
"projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider".
This is mutually exclusive with "credentials_json".
required: false
service_account:
description: |-
Email address or unique identifier of the Google Cloud service account for
which to generate credentials. This is required if
"workload_identity_provider" is specified.
required: false
audience:
description: |-
The value for the audience (aud) parameter in GitHub's generated OIDC
token. This value defaults to the value of "workload_identity_provider",
which is also the default value Google Cloud expects for the audience
parameter on the token.
default: ''
required: false
credentials_json:
description: |-
The Google Cloud JSON service account key to use for authentication. This
is mutually exclusive with "workload_identity_provider".
required: false
create_credentials_file:
description: |-
If true, the action will securely generate a credentials file which can be
used for authentication via gcloud and Google Cloud SDKs.
default: 'true'
required: false
export_environment_variables:
description: |-
If true, the action will export common environment variables which are
known to be consumed by popular downstream libraries and tools, including:
- CLOUDSDK_PROJECT
- CLOUDSDK_CORE_PROJECT
- GCP_PROJECT
- GCLOUD_PROJECT
- GOOGLE_CLOUD_PROJECT
If "create_credentials_file" is true, additional environment variables are
exported:
- CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE
- GOOGLE_APPLICATION_CREDENTIALS
- GOOGLE_GHA_CREDS_PATH
If false, the action will not export any environment variables, meaning
future steps are unlikely to be automatically authenticated to Google
Cloud.
default: 'true'
required: false
token_format:
description: |-
Output format for the generated authentication token. For OAuth 2.0 access
tokens, specify "access_token". For OIDC tokens, specify "id_token". To
skip token generation, leave this value empty.
default: ''
required: false
delegates:
description: |-
List of additional service account emails or unique identities to use for
impersonation in the chain.
default: ''
required: false
universe:
description: |-
The Google Cloud universe to use for constructing API endpoints. The
default universe is "googleapis.com", which corresponds to
https://cloud.google.com. Trusted Partner Cloud and Google Distributed
Hosted Cloud should set this to their universe address.
required: false
default: 'googleapis.com'
request_reason:
description: |-
An optional Reason Request System Parameter for each API call made by the
GitHub Action. This will inject the "X-Goog-Request-Reason" HTTP header,
which will provide user-supplied information in Google Cloud audit logs.
required: false
cleanup_credentials:
description: |-
If true, the action will remove any created credentials from the
filesystem upon completion. This only applies if "create_credentials_file"
is true.
default: 'true'
required: false
# access token params
access_token_lifetime:
description: |-
Desired lifetime duration of the access token, in seconds. This must be
specified as the number of seconds with a trailing "s" (e.g. 30s). This is
only valid when "token_format" is "access_token".
default: '3600s'
required: false
access_token_scopes:
description: |-
List of OAuth 2.0 access scopes to be included in the generated token.
This is only valid when "token_format" is "access_token".
default: 'https://www.googleapis.com/auth/cloud-platform'
required: false
access_token_subject:
description: |-
Email address of a user to impersonate for Domain-Wide Delegation. Access
tokens created for Domain-Wide Delegation cannot have a lifetime beyond 1
hour. This is only valid when "token_format" is "access_token".
default: ''
required: false
# id token params
id_token_audience:
description: |-
The audience (aud) for the generated Google Cloud ID Token. This is only
valid when "token_format" is "id_token".
default: ''
required: false
id_token_include_email:
description: |-
Optional parameter of whether to include the service account email in the
generated token. If true, the token will contain "email" and
"email_verified" claims. This is only valid when "token_format" is
"id_token".
default: 'false'
required: false
outputs:
project_id:
description: |-
Provided or extracted value for the Google Cloud project ID.
credentials_file_path:
description: |-
Path on the local filesystem where the generated credentials file resides.
This is only available if "create_credentials_file" was set to true.
auth_token:
description: |-
The intermediate authentication token, which could be used to call other
Google Cloud APIs, depending on how you configured IAM.
access_token:
description: |-
The Google Cloud access token for calling other Google Cloud APIs. This is
only available when "token_format" is "access_token".
id_token:
description: |-
The Google Cloud ID token. This is only available when "token_format" is
"id_token".
branding:
icon: 'lock'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
post: 'dist/post/index.js'
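A Workload Identity Federation sketch using only the inputs and outputs documented above; the `@v2` ref, project, pool, and service account values are placeholders:

```yaml
permissions:
  contents: read
  id-token: write   # required for Workload Identity Federation
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: auth
        uses: google-github-actions/auth@v2   # ref is an assumption
        with:
          project_id: my-project
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider
          service_account: my-sa@my-project.iam.gserviceaccount.com
          token_format: access_token
      - run: echo "Authenticated to project ${{ steps.auth.outputs.project_id }}"
```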
Action ID: marketplace/sobolevn/create-envfile
Author: Forest Anderson
Publisher: sobolevn
Repository: github.com/sobolevn/create-envfile
Github Action to create a .env file with Github Secrets
| Name | Required | Description |
|---|---|---|
| file_name | Optional | The filename for the envfile Default: .env |
| directory | Optional | The directory to put the envfile in |
name: 'Create .env file'
description: 'Github Action to create a .env file with Github Secrets'
author: 'Forest Anderson'
branding:
icon: 'briefcase'
color: 'gray-dark'
inputs:
file_name:
description: 'The filename for the envfile'
default: '.env'
directory:
description: 'The directory to put the envfile in'
default: ''
runs:
using: 'docker'
image: 'Dockerfile'
Action ID: marketplace/gradle/update-jdks-action
Author: Gradle Inc.
Publisher: gradle
Repository: github.com/gradle/update-jdks-action
Update .teamcity/jdks.yaml with the latest JDK versions
| Name | Required | Description |
|---|---|---|
| token | Required | The GitHub authentication token Default: ${{ github.token }} |
name: 'Update JDKs'
description: 'Update .teamcity/jdks.yaml with the latest JDK versions'
author: 'Gradle Inc.'
inputs:
token:
required: true
description: 'The GitHub authentication token'
default: ${{ github.token }}
runs:
using: node20
main: dist/index.js
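A minimal invocation sketch; the `@v1` ref and the weekly schedule are assumptions:

```yaml
on:
  schedule:
    - cron: '0 5 * * 1'   # illustrative weekly run
jobs:
  update-jdks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: gradle/update-jdks-action@v1   # ref is an assumption
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
```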
Action ID: marketplace/yeslayla/run-gut-tests-action
Author: josephbmanley
Publisher: yeslayla
Repository: github.com/yeslayla/run-gut-tests-action
Run GUT tests for a Godot project
| Name | Required | Description |
|---|---|---|
| containerImage | Optional | The container to run tests inside of. Default: barichello/godot-ci:latest |
| useContainer | Optional | Boolean value of whether or not to run container. Default: True |
| godotExecutable | Optional | Path of Godot binary to call when running GUT tests. Default: godot |
| directory | Optional | The name of the directory to run tests in. |
name: "Run GUT Tests"
description: "Run GUT tests for a Godot project"
author: "josephbmanley"
inputs:
containerImage:
description: "The container to run tests inside of."
default: "barichello/godot-ci:latest"
useContainer:
description: "Boolean value of whether or not to run container."
default: true
godotExecutable:
description: "Path of Godot binary to call when running GUT tests."
default: godot
directory:
description: "The name directory to run tests in."
runs:
using: "node12"
main: "dist/index.js"
branding:
icon: cpu
color: yellow
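A usage sketch with the documented inputs; the `@v1` ref and project path are assumptions:

```yaml
jobs:
  gut:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: yeslayla/run-gut-tests-action@v1   # ref is an assumption
        with:
          useContainer: true
          directory: ./game   # illustrative project folder
```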
Action ID: marketplace/azure/aks-set-context
Author: Unknown
Publisher: azure
Repository: github.com/azure/aks-set-context
Sets the kubeconfig on the machine to communicate with the Azure Kubernetes cluster.
| Name | Required | Description |
|---|---|---|
| resource-group | Required | Resource Group Name |
| cluster-name | Required | AKS Cluster Name |
| subscription | Optional | AKS Cluster Subscription |
| admin | Optional | Get cluster admin credentials. Values: true or false |
| use-kubelogin | Optional | Enables kubelogin for non-admin user scenario. Values: true or false |
| public-fqdn | Optional | Get private cluster credential with server address to be public fqdn. Values: true or false |
| resource-type | Optional | Either Microsoft.ContainerService/managedClusters or Microsoft.ContainerService/fleets. Default: Microsoft.ContainerService/managedClusters |
name: 'Azure Kubernetes set context'
description: 'Sets the kubeconfig on the machine to communicate with the Azure Kubernetes cluster.'
# Azure/login must be run before this action
inputs:
resource-group:
description: 'Resource Group Name'
required: true
cluster-name:
description: 'AKS Cluster Name'
required: true
subscription:
description: 'AKS Cluster Subscription'
required: false
admin:
description: 'Get cluster admin credentials. Values: true or false'
default: false
required: false
use-kubelogin:
description: 'Enables kubelogin for non-admin user scenario. Values: true or false'
default: false
required: false
public-fqdn:
description: 'Get private cluster credential with server address to be public fqdn. Values: true or false'
default: false
required: false
resource-type:
description: Either Microsoft.ContainerService/managedClusters or Microsoft.ContainerService/fleets.
required: false
default: 'Microsoft.ContainerService/managedClusters'
branding:
color: 'green'
runs:
using: 'node20'
main: 'lib/index.js'
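A sketch of the documented flow, with Azure/login running first as the metadata comment requires; the action refs and resource names are assumptions:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2                # must run before aks-set-context
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/aks-set-context@v4      # ref is an assumption
        with:
          resource-group: my-rg             # placeholder values
          cluster-name: my-aks-cluster
          use-kubelogin: 'true'
      - run: kubectl get nodes
```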
Action ID: marketplace/docker/cagent-action
Author: Docker
Publisher: docker
Repository: github.com/docker/cagent-action
Run a CAgent AI agent with a single line
| Name | Required | Description |
|---|---|---|
| agent | Required | Agent identifier (e.g., docker/code-analyzer from Docker Hub, or path to .yaml file) |
| prompt | Optional | Prompt to pass to the agent. If not provided, uses a default based on the agent type |
| cagent-version | Optional | Version of cagent to use Default: v1.9.12 |
| mcp-gateway | Optional | Install mcp-gateway (true/false) Default: false |
| mcp-gateway-version | Optional | Version of mcp-gateway to use (specifying this will enable mcp-gateway installation) Default: v0.22.0 |
| anthropic-api-key | Optional | Anthropic API key (defaults to ANTHROPIC_API_KEY secret) |
| openai-api-key | Optional | OpenAI API key (defaults to OPENAI_API_KEY secret) |
| google-api-key | Optional | Google API key for Gemini (defaults to GOOGLE_API_KEY secret) |
| github-token | Optional | GitHub token for API access (defaults to GITHUB_TOKEN env var) |
| timeout | Optional | Timeout in seconds for agent execution (0 for no timeout) Default: 0 |
| debug | Optional | Enable debug mode with verbose logging (true/false) Default: false |
| working-directory | Optional | Working directory to run the agent in Default: . |
| yolo | Optional | Enable yolo mode - auto-approve all prompts (true/false) Default: true |
| extra-args | Optional | Additional arguments to pass to cagent exec |
| Name | Description |
|---|---|
| exit-code | Exit code from cagent exec |
| output-file | Path to the output log file |
| cagent-version | Version of cagent that was used |
| mcp-gateway-installed | Whether mcp-gateway was installed (true/false) |
| execution-time | Agent execution time in seconds |
| security-blocked | Whether execution was blocked due to security concerns |
| secrets-detected | Whether secrets were detected in output |
| prompt-suspicious | Whether suspicious patterns were detected in user prompt |
| input-risk-level | Risk level of input (low/medium/high) |
name: "CAgent Runner"
description: "Run a CAgent AI agent with a single line"
author: "Docker"
branding:
icon: "cpu"
color: "blue"
inputs:
agent:
description: "Agent identifier (e.g., docker/code-analyzer from Docker Hub, or path to .yaml file)"
required: true
prompt:
description: "Prompt to pass to the agent. If not provided, uses a default based on the agent type"
required: false
cagent-version:
description: "Version of cagent to use"
required: false
default: "v1.9.12"
mcp-gateway:
description: "Install mcp-gateway (true/false)"
required: false
default: "false"
mcp-gateway-version:
description: "Version of mcp-gateway to use (specifying this will enable mcp-gateway installation)"
required: false
default: "v0.22.0"
anthropic-api-key:
description: "Anthropic API key (defaults to ANTHROPIC_API_KEY secret)"
required: false
openai-api-key:
description: "OpenAI API key (defaults to OPENAI_API_KEY secret)"
required: false
google-api-key:
description: "Google API key for Gemini (defaults to GOOGLE_API_KEY secret)"
required: false
github-token:
description: "GitHub token for API access (defaults to GITHUB_TOKEN env var)"
required: false
timeout:
description: "Timeout in seconds for agent execution (0 for no timeout)"
required: false
default: "0"
debug:
description: "Enable debug mode with verbose logging (true/false)"
required: false
default: "false"
working-directory:
description: "Working directory to run the agent in"
required: false
default: "."
yolo:
description: "Enable yolo mode - auto-approve all prompts (true/false)"
required: false
default: "true"
extra-args:
description: "Additional arguments to pass to cagent exec"
required: false
default: ""
outputs:
exit-code:
description: "Exit code from cagent exec"
value: ${{ steps.run-agent.outputs.exit-code }}
output-file:
description: "Path to the output log file"
value: ${{ steps.run-agent.outputs.output-file }}
cagent-version:
description: "Version of cagent that was used"
value: ${{ steps.setup-binaries.outputs.cagent-version }}
mcp-gateway-installed:
description: "Whether mcp-gateway was installed (true/false)"
value: ${{ steps.setup-binaries.outputs.mcp-installed }}
execution-time:
description: "Agent execution time in seconds"
value: ${{ steps.run-agent.outputs.execution-time }}
security-blocked:
description: "Whether execution was blocked due to security concerns"
value: ${{ steps.sanitize-input.outputs.blocked == 'true' || steps.sanitize-output.outputs.leaked == 'true' }}
secrets-detected:
description: "Whether secrets were detected in output"
value: ${{ steps.sanitize-output.outputs.leaked }}
prompt-suspicious:
description: "Whether suspicious patterns were detected in user prompt"
value: ${{ steps.sanitize-prompt.outputs.suspicious }}
input-risk-level:
description: "Risk level of input (low/medium/high)"
value: ${{ steps.sanitize-input.outputs.risk-level }}
runs:
using: "composite"
steps:
- name: Validate inputs
id: validate-inputs
shell: bash
env:
AGENT: ${{ inputs.agent }}
CAGENT_VERSION: ${{ inputs.cagent-version }}
MCP_GATEWAY: ${{ inputs.mcp-gateway }}
MCP_GATEWAY_VERSION: ${{ inputs.mcp-gateway-version }}
DEBUG: ${{ inputs.debug }}
YOLO: ${{ inputs.yolo }}
EXTRA_ARGS: ${{ inputs.extra-args }}
run: |
# Validate agent is provided
if [[ -z "$AGENT" ]]; then
echo "::error::'agent' input is required"
exit 1
fi
# Validate cagent version format
if ! [[ "$CAGENT_VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+.*$ ]]; then
echo "::error::Invalid cagent version format '$CAGENT_VERSION'. Expected format: v1.2.3"
exit 1
fi
# Validate mcp-gateway version format if it will be installed
if [[ "$MCP_GATEWAY" == "true" ]]; then
if ! [[ "$MCP_GATEWAY_VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+.*$ ]]; then
echo "::error::Invalid mcp-gateway version format '$MCP_GATEWAY_VERSION'. Expected format: v1.2.3"
exit 1
fi
fi
if [[ "$DEBUG" == "true" ]]; then
echo "::debug::Validation passed"
echo "::debug::agent: $AGENT"
echo "::debug::cagent version: $CAGENT_VERSION"
echo "::debug::mcp-gateway version: $MCP_GATEWAY_VERSION"
echo "::debug::mcp-gateway install: $MCP_GATEWAY"
fi
# ========================================
# SECURITY: Sanitize and Analyze Input
# ========================================
- name: Sanitize and analyze input
if: inputs.prompt != ''
id: sanitize-input
shell: bash
env:
PROMPT_INPUT: ${{ inputs.prompt }}
ACTION_PATH: ${{ github.action_path }}
run: |
echo "🔍 Checking user-provided prompt for injection patterns..."
# Write prompt to temp file for analysis
printf '%s\n' "$PROMPT_INPUT" > /tmp/prompt-input.txt
# Run sanitization which outputs risk-level and blocked status
$ACTION_PATH/security/sanitize-input.sh /tmp/prompt-input.txt /tmp/prompt-clean.txt
- name: Cache cagent binary
id: cache-cagent
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ${{ github.workspace }}/cagent
key: cagent-${{ runner.os }}-${{ inputs.cagent-version }}
- name: Cache mcp-gateway binary
id: cache-mcp
if: ${{ inputs.mcp-gateway == 'true' }}
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
path: ~/.docker/cli-plugins/docker-mcp
key: mcp-gateway-${{ runner.os }}-${{ inputs.mcp-gateway-version }}
- name: Setup binaries
id: setup-binaries
shell: bash
env:
CAGENT_VERSION: ${{ inputs.cagent-version }}
MCP_GATEWAY: ${{ inputs.mcp-gateway }}
MCP_GATEWAY_VERSION: ${{ inputs.mcp-gateway-version }}
DEBUG: ${{ inputs.debug }}
YOLO: ${{ inputs.yolo }}
EXTRA_ARGS: ${{ inputs.extra-args }}
CAGENT_CACHE_HIT: ${{ steps.cache-cagent.outputs.cache-hit }}
MCP_CACHE_HIT: ${{ steps.cache-mcp.outputs.cache-hit }}
run: |
set -e
MCP_INSTALLED="false"
if [[ "$DEBUG" == "true" ]]; then
set -x
fi
# Function to retry downloads
retry_download() {
local url=$1
local output=$2
local max_attempts=3
local attempt=1
while [ $attempt -le $max_attempts ]; do
echo "Attempt $attempt of $max_attempts: Downloading $url"
if curl -fL -o "$output" "$url"; then
echo "Download successful"
return 0
fi
echo "Download failed, retrying..."
attempt=$((attempt + 1))
sleep 2
done
echo "::error::Failed to download after $max_attempts attempts: $url"
return 1
}
# Download cagent if not cached
if [[ "$CAGENT_CACHE_HIT" != "true" ]]; then
echo "Downloading cagent $CAGENT_VERSION..."
retry_download \
"https://github.com/docker/cagent/releases/download/$CAGENT_VERSION/cagent-linux-amd64" \
"$GITHUB_WORKSPACE/cagent"
chmod +x "$GITHUB_WORKSPACE/cagent"
else
echo "Using cached cagent binary"
fi
# Verify cagent works
if ! "$GITHUB_WORKSPACE/cagent" version; then
echo "::error::cagent binary verification failed"
exit 1
fi
# Download mcp-gateway if needed and not cached
if [[ "$MCP_GATEWAY" == "true" ]]; then
if [[ "$MCP_CACHE_HIT" != "true" ]]; then
echo "Downloading mcp-gateway $MCP_GATEWAY_VERSION..."
retry_download \
"https://github.com/docker/mcp-gateway/releases/download/$MCP_GATEWAY_VERSION/docker-mcp-linux-amd64.tar.gz" \
"mcp-gateway.tar.gz"
tar -xzf mcp-gateway.tar.gz
chmod +x docker-mcp
mkdir -p ~/.docker/cli-plugins
cp docker-mcp ~/.docker/cli-plugins/docker-mcp
else
echo "Using cached mcp-gateway binary"
fi
# Verify mcp-gateway works
if ! docker mcp version; then
echo "::error::mcp-gateway binary verification failed"
exit 1
fi
MCP_INSTALLED="true"
fi
# Set outputs
echo "cagent-version=$CAGENT_VERSION" >> $GITHUB_OUTPUT
echo "mcp-installed=$MCP_INSTALLED" >> $GITHUB_OUTPUT
- name: Run CAgent
id: run-agent
shell: bash
env:
ANTHROPIC_API_KEY: ${{ inputs.anthropic-api-key || env.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ inputs.openai-api-key || env.OPENAI_API_KEY }}
GOOGLE_API_KEY: ${{ inputs.google-api-key || env.GOOGLE_API_KEY }}
GITHUB_PERSONAL_ACCESS_TOKEN: ${{ inputs.github-token || github.token }}
AGENT: ${{ inputs.agent }}
PROMPT_INPUT: ${{ inputs.prompt }}
ACTION_PATH: ${{ github.action_path }}
DEBUG: ${{ inputs.debug }}
YOLO: ${{ inputs.yolo }}
EXTRA_ARGS: ${{ inputs.extra-args }}
TIMEOUT: ${{ inputs.timeout }}
WORKING_DIR: ${{ inputs.working-directory }}
CAGENT_VERSION: ${{ inputs.cagent-version }}
MCP_INSTALLED: ${{ steps.setup-binaries.outputs.mcp-installed }}
run: |
set -e
# Change to working directory
cd "$WORKING_DIR"
if [[ "$DEBUG" == "true" ]]; then
set -x
echo "::debug::Working directory: $(pwd)"
echo "::debug::GitHub workspace: $GITHUB_WORKSPACE"
fi
# Create output file early (before any validation exits)
# This ensures downstream steps always have a valid output file reference
OUTPUT_FILE=$(mktemp /tmp/cagent-output.XXXXXX.log)
echo "output-file=$OUTPUT_FILE" >> $GITHUB_OUTPUT
echo "Output file: $OUTPUT_FILE"
# Build command arguments array (SECURE: no eval!)
ARGS=("exec")
# Add flags
if [ "$YOLO" = "true" ]; then
ARGS+=("--yolo")
fi
# Add extra args if provided
# Note: This uses simple word splitting. Quoted arguments with spaces are not supported.
# Using eval would be a security risk with user-provided input.
if [ -n "$EXTRA_ARGS" ]; then
read -ra EXTRA_ARGS_ARRAY <<< "$EXTRA_ARGS"
ARGS+=("${EXTRA_ARGS_ARRAY[@]}")
fi
# Add agent
echo "Using agent: $AGENT"
ARGS+=("$AGENT")
# Add prompt if provided (pass via stdin to avoid "Argument list too long" errors)
if [ -n "$PROMPT_INPUT" ]; then
ARGS+=("-")
echo "Running cagent with ${#ARGS[@]} arguments (prompt via stdin)"
else
echo "Running cagent with ${#ARGS[@]} arguments (no prompt)"
fi
# Track execution time
START_TIME=$(date +%s)
# SECURE: Direct execution with quoted arguments (no eval!)
set +e # Don't exit on command failure
if [ "$TIMEOUT" != "0" ]; then
if [ -n "$PROMPT_INPUT" ]; then
printf '%s\n' "$PROMPT_INPUT" | timeout "$TIMEOUT" "$GITHUB_WORKSPACE/cagent" "${ARGS[@]}" 2>&1 | tee "$OUTPUT_FILE"
else
timeout "$TIMEOUT" "$GITHUB_WORKSPACE/cagent" "${ARGS[@]}" 2>&1 | tee "$OUTPUT_FILE"
fi
EXIT_CODE=$?
if [ $EXIT_CODE -eq 124 ]; then
echo "::error::Agent execution timed out after $TIMEOUT seconds"
fi
else
if [ -n "$PROMPT_INPUT" ]; then
printf '%s\n' "$PROMPT_INPUT" | "$GITHUB_WORKSPACE/cagent" "${ARGS[@]}" 2>&1 | tee "$OUTPUT_FILE"
EXIT_CODE=${PIPESTATUS[0]}
else
"$GITHUB_WORKSPACE/cagent" "${ARGS[@]}" 2>&1 | tee "$OUTPUT_FILE"
EXIT_CODE=$?
fi
fi
set -e
END_TIME=$(date +%s)
EXECUTION_TIME=$((END_TIME - START_TIME))
# Set outputs (output-file already set at start of step)
echo "exit-code=$EXIT_CODE" >> $GITHUB_OUTPUT
echo "execution-time=$EXECUTION_TIME" >> $GITHUB_OUTPUT
# Create job summary
{
echo "## CAgent Execution Summary"
echo ""
echo "| Property | Value |"
echo "|----------|-------|"
echo "| Agent | \`$AGENT\` |"
echo "| Exit Code | $EXIT_CODE |"
echo "| Execution Time | ${EXECUTION_TIME}s |"
echo "| CAgent Version | $CAGENT_VERSION |"
echo "| MCP Gateway | $MCP_INSTALLED |"
if [ "$TIMEOUT" != "0" ]; then
echo "| Timeout | ${TIMEOUT}s |"
fi
echo ""
if [ $EXIT_CODE -eq 0 ]; then
echo "✅ **Status:** Success"
elif [ $EXIT_CODE -eq 124 ]; then
echo "⏱️ **Status:** Timeout"
else
echo "❌ **Status:** Failed"
fi
} >> $GITHUB_STEP_SUMMARY
if [[ "$DEBUG" == "true" ]]; then
echo "::debug::Exit code: $EXIT_CODE"
echo "::debug::Execution time: ${EXECUTION_TIME}s"
echo "::debug::Output file: $OUTPUT_FILE"
fi
exit $EXIT_CODE
# ========================================
# SECURITY: Sanitize Output (UNIVERSAL - All Modes)
# ========================================
- name: Sanitize output
if: always()
id: sanitize-output
shell: bash
env:
OUTPUT_FILE: ${{ steps.run-agent.outputs.output-file }}
ACTION_PATH: ${{ github.action_path }}
run: |
echo "🔍 Scanning AI response for leaked secrets..."
OUTPUT_FILE="$OUTPUT_FILE"
# Defensive check: ensure output file exists
if [ -z "$OUTPUT_FILE" ] || [ ! -f "$OUTPUT_FILE" ]; then
echo "⚠️ No output file to scan (agent may have failed during validation)"
echo "leaked=false" >> $GITHUB_OUTPUT
exit 0
fi
$ACTION_PATH/security/sanitize-output.sh "$OUTPUT_FILE"
# Extract clean output (remove verbose cagent logging, timestamps, agent markers)
echo "🧹 Extracting clean agent output..."
# Primary method: Extract from cagent-output code block (most reliable)
if grep -q '^```cagent-output' "$OUTPUT_FILE"; then
# Extract everything between ```cagent-output and the closing ```
# Using awk to handle multi-line content properly
awk '/^```cagent-output$/,/^```$/ {
if (!/^```cagent-output$/ && !/^```$/) print
}' "$OUTPUT_FILE" > "${OUTPUT_FILE}.clean"
echo "✅ Extracted clean output from cagent-output code block"
# Fallback: Extract after agent marker
elif grep -q "^--- Agent: root ---$" "$OUTPUT_FILE"; then
AGENT_LINE=$(grep -n "^--- Agent: root ---$" "$OUTPUT_FILE" | tail -1 | cut -d: -f1)
# Validate AGENT_LINE is not empty before using it in arithmetic expansion
if [ -n "$AGENT_LINE" ]; then
tail -n +$((AGENT_LINE + 1)) "$OUTPUT_FILE" | \
grep -v "^time=" | \
grep -v "^level=" | \
grep -v "For any feedback" | \
sed '/^$/N;/^\n$/d' > "${OUTPUT_FILE}.clean"
echo "⚠️ No cagent-output block found, extracted after agent marker"
else
echo "⚠️ Failed to extract line number from agent marker"
cp "$OUTPUT_FILE" "${OUTPUT_FILE}.clean"
fi
# Fallback: Just clean metadata
else
grep -v "^time=" "$OUTPUT_FILE" | \
grep -v "^level=" | \
grep -v "For any feedback" > "${OUTPUT_FILE}.clean"
echo "⚠️ No extraction markers found - cleaned metadata only"
fi
# Use the cleaned output
mv "${OUTPUT_FILE}.clean" "$OUTPUT_FILE"
- name: Update job summary with cleaned output
if: always()
shell: bash
env:
OUTPUT_FILE: ${{ steps.run-agent.outputs.output-file }}
run: |
OUTPUT_FILE="$OUTPUT_FILE"
# Check if output file exists (may not exist if agent failed during validation)
if [ -z "$OUTPUT_FILE" ] || [ ! -f "$OUTPUT_FILE" ]; then
echo "⚠️ Output file not available, skipping summary update"
exit 0
fi
# Append cleaned output to job summary
{
echo ""
echo "<hr />"
echo ""
echo "<h2>Agent Output</h2>"
echo ""
cat "$OUTPUT_FILE"
echo ""
} >> $GITHUB_STEP_SUMMARY
# ========================================
# SECURITY: Handle Security Incident
# ========================================
- name: Handle security incident
if: steps.sanitize-output.outputs.leaked == 'true'
shell: bash
env:
GH_TOKEN: ${{ inputs.github-token || github.token }}
REPOSITORY: ${{ github.repository }}
RUN_ID: ${{ github.run_id }}
run: |
cat <<'ERROR_MSG' >&2
═══════════════════════════════════════════════════════
🚨 SECURITY INCIDENT: SECRET LEAK DETECTED
═══════════════════════════════════════════════════════
A secret was detected in the AI agent response
Check the workflow logs for the leaked secret
IMMEDIATE ACTIONS REQUIRED:
1. Review workflow logs for the leaked secret
2. Investigate the prompt/input that triggered this
3. Review who triggered this workflow
4. ROTATE ALL SECRETS IMMEDIATELY
═══════════════════════════════════════════════════════
ERROR_MSG
# Create security incident issue
BODY="**CRITICAL SECURITY INCIDENT**
A secret was detected in the AI agent response for workflow run $RUN_ID
## Actions Taken
✓ Workflow failed with error
✓ Security incident issue created
## Required Actions
1. Review workflow logs: https://github.com/$REPOSITORY/actions
2. **ROTATE COMPROMISED SECRETS IMMEDIATELY**
- ANTHROPIC_API_KEY
- GITHUB_TOKEN
- OPENAI_API_KEY
- GOOGLE_API_KEY
- Any other exposed credentials
3. Investigate the workflow trigger and input prompt
4. Review workflow run history for suspicious patterns
## Timeline
- Incident detected: $(date -u +%Y-%m-%dT%H:%M:%SZ)
- Workflow run: https://github.com/$REPOSITORY/actions/runs/$RUN_ID
## Next Steps
- [ ] Secrets rotated
- [ ] Logs reviewed
- [ ] Incident investigated
- [ ] Incident report filed
- [ ] Post-mortem completed"
gh issue create \
--repo "$REPOSITORY" \
--title "🚨 Security Alert: Secret Leak Detected in Agent Execution" \
--label "security" \
--body "$BODY"
echo "🚨 Security incident issue created"
exit 1
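A usage sketch built from the inputs and outputs above; the `@v1` ref and the agent identifier and prompt are placeholders:

```yaml
jobs:
  agent:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: cagent
        uses: docker/cagent-action@v1   # ref is an assumption
        with:
          agent: docker/code-analyzer   # example identifier from the description
          prompt: "Summarize the riskiest changes in this repository"
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          timeout: "300"
      - run: echo "Exit ${{ steps.cagent.outputs.exit-code }} after ${{ steps.cagent.outputs.execution-time }}s"
```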
Action ID: marketplace/julia-actions/julia-version
Author: Curtis Vogt
Publisher: julia-actions
Repository: github.com/julia-actions/julia-version
Resolve version specifiers into Julia versions
| Name | Required | Description |
|---|---|---|
| versions | Required | The Julia version specifier or list of specifiers to resolve. Supported formats include a scalar, JSON list, or a YAML list. When passing in a scalar prefer using a string instead of a numeric value to avoid unwanted YAML decimal conversions (e.g. `1.10` will be interpreted as `1.1`). |
| project | Optional | The path to the Julia project directory or file to use when resolving some specifiers (e.g. `min`). Defaults to using the environmental variable `JULIA_PROJECT` if set or otherwise `.`. |
| if-missing | Optional | Determine the behavior if a version specifier cannot be resolved. Default: warn |
| Name | Description |
|---|---|
| resolved-json | The unique JSON list of resolved Julia versions. Any versions which could not be resolved will be excluded from this list. |
| resolved | A single resolved Julia version when the input `versions` contains a single version specifier. Will be an empty string if version cannot be resolved. |
# See: https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions
---
name: julia-version
description: Resolve version specifiers into Julia versions
author: Curtis Vogt
branding:
icon: filter
color: purple
inputs:
versions:
description: >-
The Julia version specifier or list of specifiers to resolve. Supported
formats include a scalar, JSON list, or a YAML list. When passing in a
scalar prefer using a string instead of a numeric value to avoid unwanted
YAML decimal conversions (e.g. `1.10` will be interpreted as `1.1`).
required: true
project:
description: >-
The path to the Julia project directory or file to use when resolving some
specifiers (e.g. `min`). Defaults to using the environmental variable
`JULIA_PROJECT` if set or otherwise `.`.
required: false
default: "" # Empty string is a special value which falls back to using JULIA_PROJECT if defined, otherwise "."
if-missing:
description: >-
Determine the behavior if a version specifier cannot be resolved.
required: false
default: warn
outputs:
resolved-json:
description: >-
The unique JSON list of resolved Julia versions. Any versions which could
not be resolved will be excluded from this list.
resolved:
description: >-
A single resolved Julia version when the input `versions` contains a
single version specifier. Will be an empty string if version cannot be
resolved.
runs:
using: node20
main: dist/index.js
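A sketch of feeding the `resolved-json` output into a test matrix; the `@v1` ref and the specifier list are assumptions:

```yaml
jobs:
  resolve:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.resolve.outputs.resolved-json }}
    steps:
      - uses: actions/checkout@v4
      - id: resolve
        uses: julia-actions/julia-version@v1   # ref is an assumption
        with:
          versions: '["min", "lts", "1"]'
  test:
    needs: resolve
    runs-on: ubuntu-latest
    strategy:
      matrix:
        julia-version: ${{ fromJSON(needs.resolve.outputs.versions) }}
    steps:
      - run: echo "Testing against Julia ${{ matrix.julia-version }}"
```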
Action ID: marketplace/andstor/jsdoc-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/jsdoc-action
GitHub Action to build JSDoc documentation
| Name | Required | Description |
|---|---|---|
| source_dir | Optional | Source directory to build documentation from |
| output_dir | Optional | Output folder for the generated documentation |
| recurse | Optional | Recurse into subdirectories when scanning for source files |
| config_file | Optional | The path to a JSDoc configuration file |
| template | Optional | The JSDoc template to install |
| template_name | Optional | This input variable is deprecated in favour of "template" |
| template_dir | Optional | The relative location of the template files within the template package |
| front_page | Optional | The path to a Markdown file to be used as the front page |
name: 'JSDoc Action'
description: 'GitHub Action to build JSDoc documentation'
author: 'André Storhaug'
branding:
icon: 'file-text'
color: 'yellow'
inputs:
source_dir:
description: 'Source directory to build documentation from'
required: false
output_dir:
description: 'Output folder for the generated documentation'
required: false
recurse:
description: 'Recurse into subdirectories when scanning for source files'
required: false
config_file:
description: 'The path to a JSDoc configuration file'
required: false
template:
description: 'The JSDoc template to install'
required: false
template_name:
description: 'This input variable is deprecated in favour of "template"'
required: false
template_dir:
description: 'The relative location of the template files within the template package'
required: false
front_page:
description: 'The path to a Markdown file to be used as the front page'
required: false
runs:
using: 'node12'
main: 'src/index.js'
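A usage sketch with the documented inputs; the `@v1` ref and the paths are assumptions:

```yaml
jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: andstor/jsdoc-action@v1   # ref is an assumption
        with:
          source_dir: ./src
          output_dir: ./docs
          recurse: true
          front_page: README.md
```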
Action ID: marketplace/github/setup-licensed
Author: Unknown
Publisher: github
Repository: github.com/github/setup-licensed
Setup github/licensed for use in GitHub Actions workflows
| Name | Required | Description |
|---|---|---|
| github_token | Optional | Authentication token to use with the GitHub API |
| version | Required | The github/licensed version to install |
| install-dir | Optional | The target install directory for the github/licensed executable Default: /usr/local/bin |
name: 'Setup github/licensed'
description: 'Setup github/licensed for use in GitHub Actions workflows'
branding:
icon: check
color: green
inputs:
github_token:
description: 'Authentication token to use with the GitHub API'
required: false
version:
description: 'The github/licensed version to install'
required: true
install-dir:
description: 'The target install directory for the github/licensed executable'
default: '/usr/local/bin'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
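A usage sketch; the `@v1` ref and the version value are assumptions (only `version` is required):

```yaml
jobs:
  licenses:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: github/setup-licensed@v1   # ref is an assumption
        with:
          version: '4.x'                 # illustrative licensed version
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - run: licensed status
```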
Action ID: marketplace/atlassian/gajira-login
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-login
Log in to Jira Cloud instance
name: 'Jira Login'
description: 'Log in to Jira Cloud instance'
branding:
icon: 'log-in'
color: 'blue'
runs:
using: 'node16'
main: './dist/index.js'
Action ID: marketplace/google-github-actions/test-action
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/test-action
This action is for testing only.
# Copyright 2024 The Authors (see AUTHORS file)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Google Actions Testing'
author: 'Google LLC'
description: 'This action is for testing only.'
runs:
using: 'node20'
main: 'index.js'
Action ID: marketplace/peter-evans/approve-pull-request-action
Author: juliangruber
Publisher: peter-evans
Repository: github.com/peter-evans/approve-pull-request-action
A GitHub Action for approving pull requests
| Name | Required | Description |
|---|---|---|
| github-token | Required | GitHub Token |
| number | Required | Pull Request number |
| repo | Optional | Pull Request repo in owner/repo format |
name: Approve Pull Request
author: juliangruber
description: 'A GitHub Action for approving pull requests'
branding:
icon: 'git-pull-request'
color: green
inputs:
github-token:
description: 'GitHub Token'
required: true
number:
description: 'Pull Request number'
required: true
repo:
description: 'Pull Request repo in owner/repo format'
required: false
runs:
using: 'node12'
main: 'dist/index.js'
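A usage sketch wired to a pull_request event; the `@v2` ref is an assumption, and whether `GITHUB_TOKEN` is allowed to approve depends on repository settings:

```yaml
on: pull_request
jobs:
  approve:
    runs-on: ubuntu-latest
    steps:
      - uses: peter-evans/approve-pull-request-action@v2   # ref is an assumption
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          number: ${{ github.event.pull_request.number }}
```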
Action ID: marketplace/ammaraskar/msvc-problem-matcher
Author: Ammar Askar
Publisher: ammaraskar
Repository: github.com/ammaraskar/msvc-problem-matcher
Attaches a problem matcher that looks for errors during MSVC builds
name: MSVC Problem Matcher
description: Attaches a problem matcher that looks for errors during MSVC builds
author: Ammar Askar
branding:
icon: check-square
color: blue
runs:
using: 'node20'
main: 'index.js'
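A sketch showing the matcher registered before the compile step so MSVC diagnostics surface as annotations; the ref and build command are illustrative and assume a pre-configured ./build directory:

```yaml
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ammaraskar/msvc-problem-matcher@master   # ref is an assumption
      - run: cmake --build build --config Release      # illustrative build step
```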
Action ID: marketplace/azure/data-factory-validate-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/data-factory-validate-action
Validate all of the Azure Data Factory resources in the repository
| Name | Required | Description |
|---|---|---|
| path | Optional | Directory that contains all Data Factory resources Default: ./ |
| id | Optional | Data Factory resource ID Default: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup/providers/Microsoft.DataFactory/factories/dataFactory |
name: data-factory-validate
description: Validate all of the Azure Data Factory resources in the repository
inputs:
path:
description: 'Directory that contains all Data Factory resources'
required: false
default: ./
id:
description: 'Data Factory resource ID'
required: false
default: '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup/providers/Microsoft.DataFactory/factories/dataFactory'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.path }}
- ${{ inputs.id }}
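A usage sketch with the documented inputs; the `@v1` ref, folder, and factory resource ID are placeholders:

```yaml
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/data-factory-validate-action@v1   # ref is an assumption
        with:
          path: ./adf
          id: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.DataFactory/factories/my-adf
```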
Action ID: marketplace/github/evergreen
Author: github
Publisher: github
Repository: github.com/github/evergreen
A GitHub Action to request dependabot enablement on eligible repositories in an organization.
---
name: "Evergreen action"
author: "github"
description: "A GitHub Action to request dependabot enablement on eligible repositories in an organization."
runs:
using: "docker"
image: "docker://ghcr.io/github/evergreen:v1"
branding:
icon: "file-plus"
color: "green"
Action ID: marketplace/google-github-actions/analyze-code-security-scc
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/analyze-code-security-scc
Scan and analyze code for security risks using Google Cloud.
| Name | Required | Description |
|---|---|---|
| organization_id | Required | Google Cloud organization ID for the organization which includes the resources that you want to modify. For example, '1234'. |
| scan_file_ref | Required | Path to a file, relative to the local workspace, for the IaC file to scan. For example: ./tf_plan.json or ./artifacts/tf_plan.json |
| iac_type | Required | The IaC template type. Currently only Terraform is supported. Default: terraform |
| scan_timeout | Optional | The maximum time before the scanning stops. The value must be between "1m" and `10m`. Default: 3m |
| ignore_violations | Optional | Whether violations found in IaC file should be ignored when determining the build status. This input doesn't apply to violations that are related to generating SARIF reports and determining the `iac_scan_result`. |
| failure_criteria | Optional | The failure criteria that determines the workflow build status. You can set a threshold for the number of critical, high, medium, and low severity issues and use an aggregator (either `and` or `or`) to evaluate the criteria. To determine whether a build has failed, the threshold for each severity is evaluated against the count of issues with that severity in the IaC scan results and then severity level evaluations are aggregated using `AND` or `OR` to arrive at `failure_criteria` value. You must include an aggregator in the string. The aggregator value is case-sensitive. For example, if you set the failure criteria to `HIGH:1,LOW:1,OPERATOR:OR`, the workflow fails if there is 1 or more HIGH severity findings or 1 or more LOW severity findings. If you set the failure criteria to `HIGH:1,LOW:1,OPERATOR:AND`, the workflow fails if there is 1 or more HIGH severity findings and 1 or more LOW severity findings. If the `failure_criteria` evaluates to `true`, the workflow is marked as `FAILED`. Otherwise, the workflow is marked as `SUCCESS`. Default: Critical:1, High:1, Medium:1, Low:1, Operator:OR |
| fail_silently | Optional | If set to true, the workflow will not fail in case of any internal error including invalid credentials and plugin dependency failure. Note: This GitHub Action will always fail in case of any input validation errors. |
| Name | Description |
|---|---|
iac_scan_result |
The result of the security scan. One of: - `passed`: No violations were found or the `failure_criteria` was not satisfied. - `failed`: The `failure_criteria` was satisfied. - `error`: The action ran into an execution error, generally due to a misconfiguration or invalid credentials. |
iac_scan_result_sarif_path |
Path for the SARIF report file. This file is only available when violations are found in the scan file. |
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Analyze Code Security'
author: 'Google LLC'
description: |-
Scan and analyze code for security risks using Google Cloud.
inputs:
organization_id:
description: |-
Google Cloud organization ID for the organization which includes the
resources that you want to modify. For example, '1234'.
required: true
scan_file_ref:
description: |-
Path to a file, relative to the local workspace, for the IaC file to scan.
For example:
./tf_plan.json
or
./artifacts/tf_plan.json
required: true
iac_type:
description: |-
The IaC template type. Currently only Terraform is supported.
default: 'terraform'
required: true
scan_timeout:
description: |-
The maximum time before the scanning stops. The value must be between "1m"
and `10m`.
default: '3m'
required: false
ignore_violations:
description: |-
Whether violations found in IaC file should be ignored when determining
the build status. This input doesn't apply to violations that are related
to generating SARIF reports and determining the `iac_scan_result`.
default: false
required: false
failure_criteria:
description: |-
The failure criteria that determines the workflow build status. You can
set a threshold for the number of critical, high, medium, and low severity
issues and use an aggregator (either `and` or `or`) to evaluate the
criteria.
To determine whether a build has failed, the threshold for each severity
is evaluated against the count of issues with that severity in the IaC
scan results and then severity level evaluations are aggregated using
`AND` or `OR` to arrive at `failure_criteria` value. You must include an
aggregator in the string. The aggregator value is case-sensitive.
For example, if you set the failure criteria to `HIGH:1,LOW:1,OPERATOR:OR`,
the workflow fails if there is 1 or more HIGH severity findings or 1 or
more LOW severity findings. If you set the failure criteria to
`HIGH:1,LOW:1,OPERATOR:AND`, the workflow fails if there is 1 or more HIGH
severity findings and 1 or more LOW severity findings.
If the `failure_criteria` evaluates to `true`, the workflow is marked as
`FAILED`. Otherwise, the workflow is marked as `SUCCESS`.
default: 'Critical:1, High:1, Medium:1, Low:1, Operator:OR'
required: false
fail_silently:
description: |-
If set to true, the workflow will not fail in case of any internal error
including invalid credentials and plugin dependency failure.
Note: This GitHub Action will always fail in case of any input validation
errors.
default: false
required: false
outputs:
iac_scan_result:
description: |-
The result of the security scan. One of:
- `passed`: No violations were found or the `failure_criteria` was not
satisfied.
- `failed`: The `failure_criteria` was satisfied.
- `error`: The action ran into an execution error, generally due to a
misconfiguration or invalid credentials.
iac_scan_result_sarif_path:
description: |-
Path for the SARIF report file. This file is only available when
violations are found in the scan file.
runs:
using: 'node24'
main: 'dist/main/index.js'
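One way this might be wired into a pull-request workflow is sketched below under stated assumptions: the action refs, the Workload Identity provider string, and the organization ID are placeholders, and Google Cloud authentication via `google-github-actions/auth` is implied by the inputs but not spelled out in the metadata above.
```yaml
name: iac-scan
on: pull_request
permissions:
  contents: read
  id-token: write   # needed for OIDC-based Google Cloud authentication
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2   # hypothetical auth step; provider value is a placeholder
        with:
          workload_identity_provider: projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider
      - id: scan
        uses: google-github-actions/analyze-code-security-scc@v0   # illustrative ref
        with:
          organization_id: '1234'
          scan_file_ref: ./tf_plan.json
          iac_type: terraform
      - if: steps.scan.outputs.iac_scan_result == 'failed'
        run: echo "Violations found; SARIF at ${{ steps.scan.outputs.iac_scan_result_sarif_path }}"
```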
Action ID: marketplace/azure/k8s-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/k8s-deploy
Deploy to a Kubernetes cluster including, but not limited to Azure Kubernetes Service (AKS) clusters
| Name | Required | Description |
|---|---|---|
| namespace | Optional | Choose the target Kubernetes namespace. If the namespace is not provided, the commands will automatically use the namespace defined in the manifest files first or otherwise run in the default namespace. |
| manifests | Required | Path to the manifest files which will be used for deployment. |
| images | Optional | Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files. Example: contosodemo.azurecr.io/helloworld:test |
| imagepullsecrets | Optional | Name of a docker-registry secret that has already been set up within the cluster. Each of these secret names is added under the imagePullSecrets field for the workloads found in the input manifest files. |
| pull-images | Optional | Whether to pull the images from the registry before deployment to find the Dockerfile's path in order to add it to the annotations. Default: True |
| strategy | Required | Deployment strategy to be used. Allowed values are basic, canary and blue-green. Default: basic |
| route-method | Optional | Route based on service, ingress or SMI for the blue-green strategy. Default: service |
| version-switch-buffer | Optional | Buffer time in minutes before the switch is made to the green version (max is 300 min, i.e. 5 hrs). |
| traffic-split-method | Optional | Traffic split method to be used. Allowed values are pod and smi. Default: pod |
| traffic-split-annotations | Optional | Annotations in the form of key/value pairs to be added to TrafficSplit. Relevant only if the deployment strategy is blue-green or canary. |
| baseline-and-canary-replicas | Optional | Baseline and canary replica count. Valid values are between 0 and 100 (inclusive). |
| percentage | Optional | Percentage of traffic to redirect to the canary deployment. |
| action | Required | deploy, promote, or reject. Default: deploy |
| force | Optional | Deploy when a previous deployment already exists. If true, the --force argument is added to the apply command. |
| server-side | Optional | The apply command runs on the server instead of the client. If true, the --server-side argument is added to the apply command. |
| timeout | Optional | Timeout for the rollout status. Default: 10m |
| token | Required | GitHub token. Default: ${{ github.token }} |
| annotate-resources | Optional | Annotate the resources. If set to false, all annotations are skipped completely. Default: True |
| annotate-namespace | Optional | Annotate the target namespace. Ignored when annotate-resources is set to false or no namespace is provided. Default: True |
| private-cluster | Optional | True if the cluster is an AKS private cluster. |
| resource-group | Optional | Name of the resource group - only required if using a private cluster. |
| name | Optional | Name of the private cluster - only required if using a private cluster. |
| skip-tls-verify | Optional | True if the insecure-skip-tls-verify option should be used. Input should be 'true' or 'false'. |
| resource-type | Optional | Either Microsoft.ContainerService/managedClusters or Microsoft.ContainerService/fleets. Default: Microsoft.ContainerService/managedClusters |
name: 'Deploy to Kubernetes cluster'
description: 'Deploy to a Kubernetes cluster including, but not limited to Azure Kubernetes Service (AKS) clusters'
inputs:
# Please ensure you have used either azure/k8s-actions/aks-set-context or azure/k8s-actions/k8s-set-context in the workflow before this action
# You also need to have kubectl installed (azure/setup-kubectl)
namespace:
description: 'Choose the target Kubernetes namespace. If the namespace is not provided, the commands will automatically use the namespace defined in the manifest files first or otherwise run in the default namespace.'
required: false
default: ''
manifests:
description: 'Path to the manifest files which will be used for deployment.'
required: true
images:
description: 'Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files Example: contosodemo.azurecr.io/helloworld:test'
required: false
imagepullsecrets:
description: 'Name of a docker-registry secret that has already been set up within the cluster. Each of these secret names are added under imagePullSecrets field for the workloads found in the input manifest files'
required: false
pull-images:
description: "Switch whether to pull the images from the registry before deployment to find out Dockerfile's path in order to add it to the annotations"
required: false
default: true
strategy:
description: 'Deployment strategy to be used. Allowed values are basic, canary and blue-green'
required: true
default: 'basic'
route-method:
description: 'Route based on service, ingress or SMI for blue-green strategy'
required: false
default: 'service'
version-switch-buffer:
description: 'Indicates the buffer time in minutes before the switch is made to the green version (max is 300 min ie. 5hrs)'
required: false
default: 0
traffic-split-method:
description: 'Traffic split method to be used. Allowed values are pod and smi'
required: false
default: 'pod'
traffic-split-annotations:
description: 'Annotations in the form of key/value pairs to be added to TrafficSplit. Relevant only if the deployment strategy is blue-green or canary'
required: false
baseline-and-canary-replicas:
description: 'Baseline and canary replicas count. Valid value between 0 to 100 (inclusive)'
required: false
default: ''
percentage:
description: 'Percentage of traffic redirect to canary deployment'
required: false
default: 0
action:
description: 'deploy, promote, or reject'
required: true
default: 'deploy'
force:
description: 'Deploy when a previous deployment already exists. If true then --force argument is added to the apply command'
required: false
default: false
server-side:
description: 'The apply command runs in the server instead of the client. If true then --server-side argument is added to the apply command.'
required: false
default: false
timeout:
description: 'Timeout for the rollout status'
required: false
default: 10m
token:
description: 'Github token'
default: ${{ github.token }}
required: true
annotate-resources:
description: 'Annotate the resources. If set to false all annotations are skipped completely.'
required: false
default: true
annotate-namespace:
description: 'Annotate the target namespace. Ignored when annotate-resources is set to false or no namespace is provided.'
required: false
default: true
private-cluster:
description: 'True if cluster is AKS private cluster'
required: false
default: false
resource-group:
description: 'Name of resource group - Only required if using private cluster'
required: false
name:
description: 'Name of the private cluster - Only required if using private cluster'
required: false
skip-tls-verify:
description: True if the insecure-skip-tls-verify option should be used. Input should be 'true' or 'false'.
default: false
resource-type:
description: Either Microsoft.ContainerService/managedClusters or Microsoft.ContainerService/fleets.
required: false
default: 'Microsoft.ContainerService/managedClusters'
branding:
color: 'green'
runs:
using: 'node20'
main: 'lib/index.js'
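The comment embedded in the inputs block above notes that a cluster context and kubectl must be configured before this action runs; the sketch below shows that sequence. Every ref, resource name, and manifest path is an assumption, not part of the metadata.
```yaml
name: deploy
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2                        # hypothetical auth step and secret name
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/setup-kubectl@v3                # kubectl is required, per the comment above
      - uses: azure/aks-set-context@v3              # or k8s-set-context; illustrative ref and names
        with:
          resource-group: my-rg
          cluster-name: my-aks-cluster
      - uses: azure/k8s-deploy@v5                   # illustrative ref
        with:
          namespace: default
          manifests: |
            manifests/deployment.yaml
            manifests/service.yaml
          images: contosodemo.azurecr.io/helloworld:test
```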
Action ID: marketplace/dflook/terraform-destroy-workspace
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-destroy-workspace
Delete a Terraform workspace, destroying all resources
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the Terraform root module directory. Default: . |
| workspace | Required | The name of the Terraform workspace to destroy and delete. |
| variables | Optional | Variables to set for the terraform destroy. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace. |
| backend_config | Optional | List of Terraform backend config values, one per line. |
| backend_config_file | Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace. |
| parallelism | Optional | Limit the number of concurrent operations. Default: 0 |

| Name | Description |
|---|---|
| failure-reason | When the job outcome is `failure`, this output may be set. The value may be one of `destroy-failed` (the Terraform destroy operation failed) or `state-locked` (the Terraform state lock could not be obtained because it was already locked). If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
| lock-info | When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a JSON object containing any available state lock information, typically of the form `{"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": ""}`. |
name: terraform-destroy-workspace
description: Delete a Terraform workspace, destroying all resources
author: Daniel Flook
inputs:
path:
description: The path to the Terraform root module directory.
required: false
default: "."
workspace:
description: The name of the Terraform workspace to destroy and delete.
required: true
variables:
description: |
Variables to set for the terraform destroy. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `destroy-failed` - The Terraform destroy operation failed.
- `state-locked` - The Terraform state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/destroy-workspace.sh
branding:
icon: globe
color: purple
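A common pattern for this action is tearing down a per-branch workspace when a pull request closes; the sketch below assumes that pattern, an illustrative `@v1` ref, and a hypothetical module path.
```yaml
name: destroy-preview
on:
  pull_request:
    types: [closed]
jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/terraform-destroy-workspace@v1   # illustrative ref
        with:
          path: terraform/environments/preview         # hypothetical root module directory
          workspace: preview-${{ github.event.number }}
```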
Action ID: marketplace/actions/container-toolkit-action
Author: Your name or organization here
Publisher: actions
Repository: github.com/actions/container-toolkit-action
Provide a description here
| Name | Required | Description |
|---|---|---|
| milliseconds | Required | Your input description here. Default: 1000 |

| Name | Description |
|---|---|
| time | Your output description here |
name: The name of your action here
description: Provide a description here
author: Your name or organization here
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: heart
color: red
# Define your inputs here.
inputs:
milliseconds:
description: Your input description here
required: true
default: '1000'
# Define your outputs here.
outputs:
time:
description: Your output description here
runs:
using: docker
image: Dockerfile
env:
INPUT_MILLISECONDS: ${{ inputs.milliseconds }}
Action ID: marketplace/mheap/debug-artifact
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/debug-artifact
Uploads debug logs as an artifact
name: Debug Artifacts
description: Uploads debug logs as an artifact
runs:
using: docker
image: Dockerfile
Action ID: marketplace/dflook/tofu-new-workspace
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-new-workspace
Creates a new OpenTofu workspace. If the workspace already exists, succeeds without doing anything.
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the OpenTofu root module directory. Default: . |
| workspace | Required | The name of the OpenTofu workspace to create. |
| variables | Optional | Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace. |
| backend_config | Optional | List of OpenTofu backend config values, one per line. |
| backend_config_file | Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace. |
name: tofu-new-workspace
description: Creates a new OpenTofu workspace. If the workspace already exists, succeeds without doing anything.
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu root module directory.
required: false
default: "."
workspace:
description: The name of the OpenTofu workspace to create.
required: true
variables:
description: |
Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/new-workspace.sh
branding:
icon: globe
color: purple
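This is the mirror image of the destroy action further up: creating a per-branch workspace when a pull request opens. The ref and paths are again illustrative rather than documented.
```yaml
name: create-preview
on:
  pull_request:
    types: [opened, reopened]
jobs:
  workspace:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/tofu-new-workspace@v1   # illustrative ref
        with:
          path: tofu/environments/preview    # hypothetical root module directory
          workspace: preview-${{ github.event.number }}
```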
Action ID: marketplace/github/combine-prs
Author: Grant Birkinbine
Publisher: github
Repository: github.com/github/combine-prs
Combine multiple PRs into a single PR
| Name | Required | Description |
|---|---|---|
| github_token | Required | The GitHub token used to create an authenticated client - provided for you by default! Default: ${{ github.token }} |
| branch_prefix | Required | The prefix of the branches to combine. Default: dependabot |
| pr_title | Required | The title of the pull request to create. Default: Combined PRs |
| pr_body_header | Required | The header of the pull request body. Default: # Combined PRs ➡️📦⬅️ |
| min_combine_number | Required | The minimum number of PRs that have to match criteria in order to create a combined PR. Default: 2 |
| branch_regex | Optional | The regex to match the branches to combine - more control than branch_prefix. |
| ci_required | Required | Whether or not CI should be passing to combine the PR. Default: true |
| review_required | Optional | Whether or not a review should be required to combine the PR. Default: false |
| combine_branch_name | Required | The name of the branch to combine the PRs into. Default: combined-prs-branch |
| ignore_label | Required | The label to ignore when combining PRs. Default: nocombine |
| select_label | Optional | The label marking PRs that should be combined. |
| labels | Optional | A comma-separated list of labels to add to the combined PR. |
| assignees | Optional | A comma-separated list of assignees to add to the combined PR. |
| autoclose | Optional | Whether or not to close combined PRs if the combined PR is merged. Default: true |
| update_branch | Optional | Whether or not to update the combined branch with the latest changes from the base branch after creating the combined pull request. Default: true |
| create_from_scratch | Optional | Whether or not to start from a clean base branch when (re)creating the combined PR. Default: false |

| Name | Description |
|---|---|
| pr_url | The pull request URL if a PR was created |
| pr_number | The pull request number if a PR was created |
name: "combine-prs"
description: "Combine multiple PRs into a single PR"
author: "Grant Birkinbine"
branding:
icon: 'git-branch'
color: 'gray-dark'
inputs:
github_token:
description: The GitHub token used to create an authenticated client - Provided for you by default!
default: ${{ github.token }}
required: true
branch_prefix:
description: The prefix of the branches to combine
required: true
default: dependabot
pr_title:
description: The title of the pull request to create
required: true
default: "Combined PRs"
pr_body_header:
description: The header of the pull request body
required: true
default: "# Combined PRs ➡️📦⬅️"
min_combine_number:
description: The minimum number of PRs that have to match criteria in order to create a combined PR
required: true
default: "2"
branch_regex:
description: The regex to match the branches to combine - more control than branch_prefix
required: false
default: ""
ci_required:
description: Whether or not CI should be passing to combine the PR
required: true
default: "true"
review_required:
description: Whether or not a review should be required to combine the PR
required: false
default: "false"
combine_branch_name:
description: The name of the branch to combine the PRs into
required: true
default: "combined-prs-branch"
ignore_label:
description: The label to ignore when combining PRs
required: true
default: "nocombine"
select_label:
description: The label marking PRs that should be combined
required: false
default: ""
labels:
description: A comma separated list of labels to add to the combined PR
required: false
default: ""
assignees:
description: A comma separated list of assignees to add to the combined PR
required: false
default: ""
autoclose:
description: Whether or not to close combined PRs if the combined PR is merged
required: false
default: "true"
update_branch:
description: Whether or not to update the combined branch with the latest changes from the base branch after creating the combined pull request
default: "true"
required: false
create_from_scratch:
description: Whether or not to start from a clean base branch when (re)creating the combined PR
default: "false"
required: false
outputs:
pr_url:
description: The pull request URL if a PR was created
pr_number:
description: The pull request number if a PR was created
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/appleboy/gtalk-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/gtalk-action
Sending a Gtalk message
| Name | Required | Description |
|---|---|---|
| username | Required | gtalk user email |
| oauth_token | Required | AuthToken provides go-xmpp with the required OAuth2 token used to authenticate. |
| message | Optional | message - optional. |
| to | Optional | send message to user |
name: 'Gtalk Message Notify'
description: 'Sending a Gtalk message'
author: 'Bo-Yi Wu'
inputs:
username:
description: 'gtalk user email'
required: true
oauth_token:
description: 'AuthToken provides go-xmpp with the required OAuth2 token used to authenticate.'
required: true
message:
description: 'message - optional.'
to:
description: 'send message to user'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'message-square'
color: 'gray-dark'
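A notification sketch for this action; the secret names, recipient address, and `@v1` ref are hypothetical.
```yaml
name: notify-gtalk
on: workflow_dispatch
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/gtalk-action@v1   # illustrative ref
        with:
          username: ${{ secrets.GTALK_USERNAME }}        # hypothetical secret names
          oauth_token: ${{ secrets.GTALK_OAUTH_TOKEN }}
          to: someone@example.com
          message: 'Run ${{ github.run_id }} of ${{ github.repository }} finished'
```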
Action ID: marketplace/dflook/configure-oidc-aws-credentials
Author: Unknown
Publisher: dflook
Repository: github.com/dflook/configure-oidc-aws-credentials
Fetch temporary AWS session credentials using OIDC
| Name | Required | Description |
|---|---|---|
| role-arn | Required | ARN of the role to assume |
| sts-region | Optional | The region to use the STS regional endpoint in. If not set, will use the global endpoint. |
| audience | Optional | The OIDC audience to use. The default is set by the GitHub OIDC provider if not set here. |
| role-session-name | Optional | A name to give the role session. Default: GitHubActions |
| duration-seconds | Optional | Duration of the role session. Default: 3600 |
| export-variables | Optional | Set to true to export AWS environment variables. Default: false |

| Name | Description |
|---|---|
| aws-access-key-id | The access key id for the temporary credentials |
| aws-secret-access-key | The secret access key for the temporary credentials |
| aws-session-token | The session token for the temporary credentials |
| expiration | The date on which the credentials expire |
name: configure-oidc-aws-credentials
description: 'Fetch temporary AWS session credentials using OIDC'
inputs:
role-arn:
description: 'ARN of the role to assume'
required: true
sts-region:
description: 'The region to use the STS regional endpoint in. If not set, will use the global endpoint.'
required: false
audience:
description: 'The OIDC audience to use. The default is set by the GitHub OIDC provider if not set here.'
required: false
role-session-name:
description: 'A name to give the role session'
required: false
default: 'GitHubActions'
duration-seconds:
description: 'Duration of the role session'
required: false
default: '3600'
export-variables:
description: 'Set to true to export AWS environment variables'
required: false
default: 'false'
outputs:
aws-access-key-id:
description: 'The access key id for the temporary credentials'
aws-secret-access-key:
description: 'The secret access key for the temporary credentials'
aws-session-token:
description: 'The session token for the temporary credentials'
expiration:
description: 'The date on which the credentials expire'
runs:
using: 'node16'
main: 'dist/index.js'
branding:
icon: cloud
color: orange
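Because the credentials come from GitHub's OIDC token, the job needs `id-token: write`. The role ARN and `@v1` ref below are placeholders.
```yaml
name: aws-oidc
on: push
permissions:
  id-token: write   # required so the action can request the OIDC token
  contents: read
jobs:
  aws:
    runs-on: ubuntu-latest
    steps:
      - uses: dflook/configure-oidc-aws-credentials@v1   # illustrative ref
        with:
          role-arn: arn:aws:iam::123456789012:role/github-actions   # hypothetical role
          export-variables: true
      - run: aws sts get-caller-identity   # relies on the exported AWS_* environment variables
```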
Action ID: marketplace/peter-evans/install-llvm-action
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/install-llvm-action
Downloads and installs LLVM and Clang binaries.
| Name | Required | Description |
|---|---|---|
| version | Required | The version of LLVM and Clang binaries to install. |
| force-version | Optional | Whether to accept unsupported LLVM and Clang versions. |
| ubuntu-version | Optional | The version override of Ubuntu to use for the Linux platform. |
| directory | Optional | The directory to install LLVM and Clang binaries to. |
| cached | Optional | Whether the LLVM and Clang binaries were cached. |
| download-url | Optional | The URL to download LLVM and Clang binaries from. |
| auth | Optional | The Authorization header to use when downloading LLVM and Clang binaries. |
| env | Optional | Whether to set CC and CXX environment variables to Clang paths. |

| Name | Description |
|---|---|
| version | The full version of LLVM and Clang binaries installed. |
name: "Install LLVM and Clang"
description: "Downloads and installs LLVM and Clang binaries."
branding:
icon: "arrow-down-circle"
color: "black"
inputs:
version:
description: "The version of LLVM and Clang binaries to install."
required: true
force-version:
description: "Whether to accept unsupported LLVM and Clang versions."
required: false
ubuntu-version:
description: "The version override of Ubuntu to use for the Linux platform."
required: false
directory:
description: "The directory to install LLVM and Clang binaries to."
required: false
cached:
description: "Whether the LLVM and Clang binaries were cached."
required: false
download-url:
description: "The URL to download LLVM and Clang binaries from."
required: false
auth:
description: "The Authorization header to use when downloading LLVM and Clang binaries."
required: false
env:
description: "Whether to set CC and CXX environment variables to Clang paths."
required: false
outputs:
version:
description: "The full version of LLVM and Clang binaries installed."
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/appleboy/discord-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/discord-action
Send a message to a Discord channel using a webhook. This action allows you to notify a Discord channel by sending a message through a webhook.
| Name | Required | Description |
|---|---|---|
| webhook_url | Optional | The URL of the Discord webhook. This is the endpoint where the message will be sent. |
| webhook_id | Optional | The ID of the Discord webhook. This is a unique identifier for the webhook. |
| webhook_token | Optional | The token of the Discord webhook. This token is used to authenticate the webhook. |
| message | Optional | The content of the message to send. The message can be up to 2000 characters long. |
| file | Optional | The content of the file to send. You can attach a file to the message. |
| color | Optional | The color code for the embed message. This sets the color of the embed's side border. |
| wait | Optional | Wait for server confirmation before responding. If set to true, the action will return the created message body. |
| tts | Optional | Set to true if the message should be read aloud using text-to-speech. This enables the text-to-speech feature for the message. |
| username | Optional | Override the default username of the webhook. This allows you to set a custom username for the message sender. |
| avatar_url | Optional | Override the default avatar of the webhook. This allows you to set a custom avatar for the message sender. |
| debug | Optional | Enable debug mode. If set to true, the action will output the request and response details. |
name: "Discord Message Notify"
description: "Send a message to a Discord channel using a webhook. This action allows you to notify a Discord channel by sending a message through a webhook."
author: "Bo-Yi Wu"
inputs:
webhook_url:
description: "The URL of the Discord webhook. This is the endpoint where the message will be sent."
webhook_id:
description: "The ID of the Discord webhook. This is a unique identifier for the webhook."
webhook_token:
description: "The token of the Discord webhook. This token is used to authenticate the webhook."
message:
description: "The content of the message to send. The message can be up to 2000 characters long."
file:
description: "The content of the file to send. You can attach a file to the message."
color:
description: "The color code for the embed message. This sets the color of the embed's side border."
wait:
description: "Wait for server confirmation before responding. If set to true, the action will return the created message body."
tts:
description: "Set to true if the message should be read aloud using text-to-speech. This enables the text-to-speech feature for the message."
username:
description: "Override the default username of the webhook. This allows you to set a custom username for the message sender."
avatar_url:
description: "Override the default avatar of the webhook. This allows you to set a custom avatar for the message sender."
debug:
description: "Enable debug mode. If set to true, the action will output the request and response details."
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "message-square"
color: "gray-dark"
Action ID: marketplace/amirisback/bundletool-action-apk
Author: Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/bundletool-action-apk
This action will help you convert your aab to apks file.
| Name | Required | Description |
|---|---|---|
| aabFile | Required | Path to your aab file |
| base64Keystore | Required | The key used to sign the apk encoded in base 64 |
| keystorePassword | Required | The keystore password |
| keystoreAlias | Required | The keystore alias |
| keyPassword | Required | The password to the key |
| bundletoolVersion | Optional | The version of bundletool to use. Default: latest |
name: 'BundleTool Action APK'
description: 'This action will help you convert your aab to apks file.'
author: 'Faisal Amir'
branding:
icon: 'check-circle'
color: 'green'
inputs:
aabFile:
description: 'Path to your aab file'
required: true
base64Keystore:
description: 'The key used to sign the apk encoded in base 64'
required: true
keystorePassword:
description: 'The keystore password'
required: true
keystoreAlias:
description: 'The keystore alias'
required: true
keyPassword:
description: 'The password to the key'
required: true
bundletoolVersion:
description: 'The version of bundletool to use'
required: false
default: 'latest'
runs:
using: 'node12'
main: 'dist/index.js'
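The .aab must exist before this step runs, so the sketch includes a hypothetical Gradle build. All paths, secret names, and the `@v1` ref are placeholders.
```yaml
name: aab-to-apks
on: workflow_dispatch
jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew bundleRelease   # hypothetical step that produces the .aab
      - uses: amirisback/bundletool-action-apk@v1   # illustrative ref
        with:
          aabFile: app/build/outputs/bundle/release/app-release.aab   # hypothetical path
          base64Keystore: ${{ secrets.KEYSTORE_BASE64 }}              # hypothetical secret names
          keystorePassword: ${{ secrets.KEYSTORE_PASSWORD }}
          keystoreAlias: ${{ secrets.KEYSTORE_ALIAS }}
          keyPassword: ${{ secrets.KEY_PASSWORD }}
```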
Action ID: marketplace/stefanzweifel/laravel-phpinsights-action
Author: Stefan Zweifel <hello@stefanzweifel.io>
Publisher: stefanzweifel
Repository: github.com/stefanzweifel/laravel-phpinsights-action
Run PHP Insights
name: Laravel PHP Insights Action
description: Run PHP Insights
author: Stefan Zweifel <hello@stefanzweifel.io>
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: bar-chart-2
color: blue
Action ID: marketplace/peter-evans/create-github-app-token
Author: Gregor Martynus and Parker Brown
Publisher: peter-evans
Repository: github.com/peter-evans/create-github-app-token
GitHub Action for creating a GitHub App installation access token
| Name | Required | Description |
|---|---|---|
| app-id | Optional | GitHub App ID |
| app_id | Optional | GitHub App ID |
| private-key | Optional | GitHub App private key |
| private_key | Optional | GitHub App private key |
| owner | Optional | The owner of the GitHub App installation (defaults to current repository owner) |
| repositories | Optional | Repositories to install the GitHub App on (defaults to current repository if owner is unset) |
| skip-token-revoke | Optional | If truthy, the token will not be revoked when the current job is complete |
| skip_token_revoke | Optional | If truthy, the token will not be revoked when the current job is complete |
| github-api-url | Optional | The URL of the GitHub REST API. Default: ${{ github.api_url }} |

| Name | Description |
|---|---|
| token | GitHub installation access token |
| installation-id | GitHub App installation ID |
| app-slug | GitHub App slug |
name: "Create GitHub App Token"
description: "GitHub Action for creating a GitHub App installation access token"
author: "Gregor Martynus and Parker Brown"
branding:
icon: "lock"
color: "gray-dark"
inputs:
app-id:
description: "GitHub App ID"
required: false # TODO: When 'app_id' is removed, make 'app-id' required
app_id:
description: "GitHub App ID"
required: false
deprecationMessage: "'app_id' is deprecated and will be removed in a future version. Use 'app-id' instead."
private-key:
description: "GitHub App private key"
required: false # TODO: When 'private_key' is removed, make 'private-key' required
private_key:
description: "GitHub App private key"
required: false
deprecationMessage: "'private_key' is deprecated and will be removed in a future version. Use 'private-key' instead."
owner:
description: "The owner of the GitHub App installation (defaults to current repository owner)"
required: false
repositories:
description: "Repositories to install the GitHub App on (defaults to current repository if owner is unset)"
required: false
skip-token-revoke:
description: "If truthy, the token will not be revoked when the current job is complete"
required: false
skip_token_revoke:
description: "If truthy, the token will not be revoked when the current job is complete"
required: false
deprecationMessage: "'skip_token_revoke' is deprecated and will be removed in a future version. Use 'skip-token-revoke' instead."
# Make GitHub API configurable to support non-GitHub Cloud use cases
# see https://github.com/actions/create-github-app-token/issues/77
github-api-url:
description: The URL of the GitHub REST API.
default: ${{ github.api_url }}
outputs:
token:
description: "GitHub installation access token"
installation-id:
description: "GitHub App installation ID"
app-slug:
description: "GitHub App slug"
runs:
using: "node20"
main: "dist/main.cjs"
post: "dist/post.cjs"
Action ID: marketplace/ammaraskar/gcc-problem-matcher
Author: Ammar Askar
Publisher: ammaraskar
Repository: github.com/ammaraskar/gcc-problem-matcher
Attaches a problem matcher that looks for errors during gcc builds
name: GCC Problem Matcher
description: Attaches a problem matcher that looks for errors during gcc builds
author: Ammar Askar
branding:
icon: check-square
color: yellow
runs:
using: 'node20'
main: 'index.js'
Action ID: marketplace/amirisback/android-dynamic-feature
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-dynamic-feature
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use. Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/get-keyvault-secrets
Author: Unknown
Publisher: azure
Repository: github.com/azure/get-keyvault-secrets
Get Secrets from Azure Key Vault instance and set as output variables. github.com/azure/actions
| Name | Required | Description |
|---|---|---|
| keyvault | Required | Name of the azure key vault |
| secrets | Required | Name of the secret to be fetched |
name: 'Azure key vault - Get Secrets'
description: 'Get Secrets from Azure Key Vault instance and set as output variables. github.com/azure/actions'
inputs:
keyvault:
description: 'Name of the azure key vault'
required: true
secrets:
description: 'Name of the secret to be fetched' #Comma separated list of secret keys can be specified
required: true
branding:
icon: 'akv.svg'
color: 'blue'
runs:
using: 'node12'
main: 'lib/main.js'
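The action assumes an existing Azure login session; the sketch below adds one and then reads two secrets. The refs, vault name, and secret keys are hypothetical, and the step outputs are assumed to be keyed by secret name.
```yaml
name: keyvault-demo
on: workflow_dispatch
jobs:
  fetch:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2                          # hypothetical auth step and secret name
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - id: kv
        uses: azure/get-keyvault-secrets@v1           # illustrative ref
        with:
          keyvault: my-keyvault                       # hypothetical vault name
          secrets: 'dbPassword, apiKey'               # comma-separated list, per the inline comment above
      - run: ./deploy.sh                              # hypothetical consumer of the fetched values
        env:
          DB_PASSWORD: ${{ steps.kv.outputs.dbPassword }}   # assumed output naming
```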
Action ID: marketplace/github/automatic-contrib-prs
Author: github
Publisher: github
Repository: github.com/github/automatic-contrib-prs
A GitHub Action that opens pull requests to add a CONTRIBUTING.md file in repositories that don't have one.
---
name: "Automatic Contrib PRs"
author: "github"
description: "A GitHub Action that opens adds CONTRIBUTING.md file in repositories that dont have them."
runs:
using: "docker"
image: "docker://ghcr.io/github/automatic-contrib-prs:v2"
branding:
icon: "book"
color: "black"
Action ID: marketplace/github/contributors
Author: github
Publisher: github
Repository: github.com/github/contributors
A GitHub Action to report out contributors and contributions to a repository or organization
---
name: "Contributors Action"
author: "github"
description: "A GitHub Action to report out contributors and contributions to a repository or organization"
runs:
using: "docker"
image: "docker://ghcr.io/github/contributors:v1"
branding:
icon: "users"
color: "green"
Action ID: marketplace/actions/setup-java
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-java
Set up a specific version of the Java JDK and add the command-line tools to the PATH
| Name | Required | Description |
|---|---|---|
| java-version | Optional | The Java version to set up. Takes a whole or semver Java version. See examples of supported syntax in README file |
| java-version-file | Optional | The path to the `.java-version` file. See examples of supported syntax in README file |
| distribution | Required | Java distribution. See the list of supported distributions in README file |
| java-package | Optional | The package type (jdk, jre, jdk+fx, jre+fx). Default: jdk |
| architecture | Optional | The architecture of the package (defaults to the action runner's architecture) |
| jdkFile | Optional | Path to where the compressed JDK is located |
| check-latest | Optional | Set this option if you want the action to check for the latest available version that satisfies the version spec |
| server-id | Optional | ID of the distributionManagement repository in the pom.xml file. Default: github |
| server-username | Optional | Environment variable name for the username for authentication to the Apache Maven repository. Default: GITHUB_ACTOR |
| server-password | Optional | Environment variable name for password or token for authentication to the Apache Maven repository. Default: GITHUB_TOKEN |
| settings-path | Optional | Path to where the settings.xml file will be written. Default is ~/.m2. |
| overwrite-settings | Optional | Overwrite the settings.xml file if it exists. Default: true |
| gpg-private-key | Optional | GPG private key to import. Default is empty string. |
| gpg-passphrase | Optional | Environment variable name for the GPG private key passphrase. Default is $GPG_PASSPHRASE. |
| cache | Optional | Name of the build platform to cache dependencies. It can be "maven", "gradle" or "sbt". |
| cache-dependency-path | Optional | The path to a dependency file: pom.xml, build.gradle, build.sbt, etc. This option can be used with the `cache` option. If this option is omitted, the action searches for the dependency file in the entire repository. This option supports wildcards and a list of file names for caching multiple dependencies. |
| job-status | Optional | Workaround to pass job status to post job step. This variable is not intended for manual setting. Default: ${{ job.status }} |
| token | Optional | The token used to authenticate when fetching version manifests hosted on github.com, such as for the Microsoft Build of OpenJDK. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. Default: ${{ github.server_url == 'https://github.com' && github.token \|\| '' }} |
| mvn-toolchain-id | Optional | Name of Maven Toolchain ID if the default name of "${distribution}_${java-version}" is not wanted. See examples of supported syntax in Advanced Usage file |
| mvn-toolchain-vendor | Optional | Name of Maven Toolchain Vendor if the default name of "${distribution}" is not wanted. See examples of supported syntax in Advanced Usage file |

| Name | Description |
|---|---|
| distribution | Distribution of Java that has been installed |
| version | Actual version of the java environment that has been installed |
| path | Path to where the java environment has been installed (same as $JAVA_HOME) |
| cache-hit | A boolean value to indicate an exact match was found for the primary key |
name: 'Setup Java JDK'
description: 'Set up a specific version of the Java JDK and add the
command-line tools to the PATH'
author: 'GitHub'
inputs:
java-version:
description: 'The Java version to set up. Takes a whole or semver Java version. See examples of supported syntax in README file'
java-version-file:
description: 'The path to the `.java-version` file. See examples of supported syntax in README file'
distribution:
description: 'Java distribution. See the list of supported distributions in README file'
required: true
java-package:
description: 'The package type (jdk, jre, jdk+fx, jre+fx)'
required: false
default: 'jdk'
architecture:
description: "The architecture of the package (defaults to the action runner's architecture)"
required: false
jdkFile:
description: 'Path to where the compressed JDK is located'
required: false
check-latest:
description: 'Set this option if you want the action to check for the latest available version that satisfies the version spec'
required: false
default: false
server-id:
description: 'ID of the distributionManagement repository in the pom.xml
file. Default is `github`'
required: false
default: 'github'
server-username:
description: 'Environment variable name for the username for authentication
to the Apache Maven repository. Default is $GITHUB_ACTOR'
required: false
default: 'GITHUB_ACTOR'
server-password:
description: 'Environment variable name for password or token for
authentication to the Apache Maven repository. Default is $GITHUB_TOKEN'
required: false
default: 'GITHUB_TOKEN'
settings-path:
description: 'Path to where the settings.xml file will be written. Default is ~/.m2.'
required: false
overwrite-settings:
description: 'Overwrite the settings.xml file if it exists. Default is "true".'
required: false
default: true
gpg-private-key:
description: 'GPG private key to import. Default is empty string.'
required: false
gpg-passphrase:
description: 'Environment variable name for the GPG private key passphrase. Default is
$GPG_PASSPHRASE.'
required: false
cache:
description: 'Name of the build platform to cache dependencies. It can be "maven", "gradle" or "sbt".'
required: false
cache-dependency-path:
description: 'The path to a dependency file: pom.xml, build.gradle, build.sbt, etc. This option can be used with the `cache` option. If this option is omitted, the action searches for the dependency file in the entire repository. This option supports wildcards and a list of file names for caching multiple dependencies.'
required: false
job-status:
description: 'Workaround to pass job status to post job step. This variable is not intended for manual setting'
default: ${{ job.status }}
token:
description: The token used to authenticate when fetching version manifests hosted on github.com, such as for the Microsoft Build of OpenJDK. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
mvn-toolchain-id:
description: 'Name of Maven Toolchain ID if the default name of "${distribution}_${java-version}" is not wanted. See examples of supported syntax in Advanced Usage file'
required: false
mvn-toolchain-vendor:
description: 'Name of Maven Toolchain Vendor if the default name of "${distribution}" is not wanted. See examples of supported syntax in Advanced Usage file'
required: false
outputs:
distribution:
description: 'Distribution of Java that has been installed'
version:
description: 'Actual version of the java environment that has been installed'
path:
description: 'Path to where the java environment has been installed (same as $JAVA_HOME)'
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'
runs:
using: 'node24'
main: 'dist/setup/index.js'
post: 'dist/cleanup/index.js'
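The only required input is `distribution`; a typical Maven build might look like the sketch below, where the `@v4` ref and the Java version are illustrative choices rather than values taken from the metadata.
```yaml
name: java-build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4   # illustrative ref
        with:
          distribution: temurin
          java-version: '21'
          cache: maven
      - run: mvn -B verify
```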
Action ID: marketplace/julia-actions/restic-action
Author: Sascha Mann
Publisher: julia-actions
Repository: github.com/julia-actions/restic-action
Backup a repo with restic
| Name | Required | Description |
|---|---|---|
| token | Required | GitHub access token. Default: ${{ github.token }} |
name: restic backup action
description: Backup a repo with restic
author: Sascha Mann
branding:
icon: save
color: green
inputs:
token:
description: GitHub access token
default: ${{ github.token }}
required: true
runs:
using: composite
steps:
- name: Install dependencies
run: echo "This action has been disabled."
shell: bash
Action ID: marketplace/jidicula/black
Author: Łukasz Langa and contributors to Black
Publisher: jidicula
Repository: github.com/jidicula/black
The uncompromising Python code formatter.
| Name | Required | Description |
|---|---|---|
| options | Optional | Options passed to black. Use `black --help` to see available options. Default: --check --diff |
| src | Optional | Source to run black. Default: . |
| black_args | Optional | [DEPRECATED] Black input arguments. |
name: "Black"
description: "The uncompromising Python code formatter."
author: "Łukasz Langa and contributors to Black"
inputs:
options:
description:
"Options passed to black. Use `black --help` to see available options. Default:
'--check'"
required: false
default: "--check --diff"
src:
description: "Source to run black. Default: '.'"
required: false
default: "."
black_args:
description: "[DEPRECATED] Black input arguments."
required: false
default: ""
branding:
color: "black"
icon: "check-circle"
runs:
using: "docker"
image: "action/Dockerfile"
Action ID: marketplace/posener/goaction-example
Author: Unknown
Publisher: posener
Repository: github.com/posener/goaction-example
Package main is an example of the simplest goaction script.
| Name | Required | Description |
|---|---|---|
| who | Optional | Say hello to who. Default: world |
# File generated by github.com/posener/goaction. DO NOT EDIT.
name: goaction-example
description: "Package main is an example of the simplest goaction sctipt."
inputs:
who:
default: world
description: "Say hello to who"
required: false
runs:
using: docker
image: Dockerfile
args:
- "-who=${{ inputs.who }}"
Action ID: marketplace/graalvm/setup-java
Author: GitHub
Publisher: graalvm
Repository: github.com/graalvm/setup-java
Set up a specific version of the Java JDK and add the command-line tools to the PATH
| Name | Required | Description |
|---|---|---|
| java-version | Optional | The Java version to set up. Takes a whole or semver Java version. See examples of supported syntax in README file |
| java-version-file | Optional | The path to the `.java-version` file. See examples of supported syntax in README file |
| distribution | Required | Java distribution. See the list of supported distributions in README file |
| java-package | Optional | The package type (jdk, jre, jdk+fx, jre+fx). Default: jdk |
| architecture | Optional | The architecture of the package (defaults to the action runner's architecture) |
| jdkFile | Optional | Path to where the compressed JDK is located |
| check-latest | Optional | Set this option if you want the action to check for the latest available version that satisfies the version spec |
| server-id | Optional | ID of the distributionManagement repository in the pom.xml file. Default: github |
| server-username | Optional | Environment variable name for the username for authentication to the Apache Maven repository. Default: GITHUB_ACTOR |
| server-password | Optional | Environment variable name for password or token for authentication to the Apache Maven repository. Default: GITHUB_TOKEN |
| settings-path | Optional | Path to where the settings.xml file will be written. Default is ~/.m2. |
| overwrite-settings | Optional | Overwrite the settings.xml file if it exists. Default: true |
| gpg-private-key | Optional | GPG private key to import. Default is empty string. |
| gpg-passphrase | Optional | Environment variable name for the GPG private key passphrase. Default is $GPG_PASSPHRASE. |
| cache | Optional | Name of the build platform to cache dependencies. It can be "maven", "gradle" or "sbt". |
| job-status | Optional | Workaround to pass job status to post job step. This variable is not intended for manual setting. Default: ${{ job.status }} |
| token | Optional | The token used to authenticate when fetching version manifests hosted on github.com, such as for the Microsoft Build of OpenJDK. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. Default: ${{ github.server_url == 'https://github.com' && github.token \|\| '' }} |
| mvn-toolchain-id | Optional | Name of Maven Toolchain ID if the default name of "${distribution}_${java-version}" is not wanted. See examples of supported syntax in Advanced Usage file |
| mvn-toolchain-vendor | Optional | Name of Maven Toolchain Vendor if the default name of "${distribution}" is not wanted. See examples of supported syntax in Advanced Usage file |

| Name | Description |
|---|---|
| distribution | Distribution of Java that has been installed |
| version | Actual version of the java environment that has been installed |
| path | Path to where the java environment has been installed (same as $JAVA_HOME) |
| cache-hit | A boolean value to indicate an exact match was found for the primary key |
name: 'Setup Java JDK'
description: 'Set up a specific version of the Java JDK and add the
command-line tools to the PATH'
author: 'GitHub'
inputs:
java-version:
description: 'The Java version to set up. Takes a whole or semver Java version. See examples of supported syntax in README file'
java-version-file:
description: 'The path to the `.java-version` file. See examples of supported syntax in README file'
distribution:
description: 'Java distribution. See the list of supported distributions in README file'
required: true
java-package:
description: 'The package type (jdk, jre, jdk+fx, jre+fx)'
required: false
default: 'jdk'
architecture:
description: "The architecture of the package (defaults to the action runner's architecture)"
required: false
jdkFile:
description: 'Path to where the compressed JDK is located'
required: false
check-latest:
description: 'Set this option if you want the action to check for the latest available version that satisfies the version spec'
required: false
default: false
server-id:
description: 'ID of the distributionManagement repository in the pom.xml
file. Default is `github`'
required: false
default: 'github'
server-username:
description: 'Environment variable name for the username for authentication
to the Apache Maven repository. Default is $GITHUB_ACTOR'
required: false
default: 'GITHUB_ACTOR'
server-password:
description: 'Environment variable name for password or token for
authentication to the Apache Maven repository. Default is $GITHUB_TOKEN'
required: false
default: 'GITHUB_TOKEN'
settings-path:
description: 'Path to where the settings.xml file will be written. Default is ~/.m2.'
required: false
overwrite-settings:
description: 'Overwrite the settings.xml file if it exists. Default is "true".'
required: false
default: true
gpg-private-key:
description: 'GPG private key to import. Default is empty string.'
required: false
gpg-passphrase:
description: 'Environment variable name for the GPG private key passphrase. Default is
$GPG_PASSPHRASE.'
required: false
cache:
description: 'Name of the build platform to cache dependencies. It can be "maven", "gradle" or "sbt".'
required: false
job-status:
description: 'Workaround to pass job status to post job step. This variable is not intended for manual setting'
default: ${{ job.status }}
token:
description: The token used to authenticate when fetching version manifests hosted on github.com, such as for the Microsoft Build of OpenJDK. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
mvn-toolchain-id:
description: 'Name of Maven Toolchain ID if the default name of "${distribution}_${java-version}" is not wanted. See examples of supported syntax in Advanced Usage file'
required: false
mvn-toolchain-vendor:
description: 'Name of Maven Toolchain Vendor if the default name of "${distribution}" is not wanted. See examples of supported syntax in Advanced Usage file'
required: false
outputs:
distribution:
description: 'Distribution of Java that has been installed'
version:
description: 'Actual version of the java environment that has been installed'
path:
description: 'Path to where the java environment has been installed (same as $JAVA_HOME)'
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'
runs:
using: 'node16'
main: 'dist/setup/index.js'
post: 'dist/cleanup/index.js'
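A minimal usage sketch for this action; the version tag, distribution, and Java version below are illustrative choices, not values mandated by the metadata:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Set up JDK
    uses: actions/setup-java@v4        # version tag is illustrative
    with:
      distribution: 'temurin'          # any supported distribution from the README
      java-version: '17'
      cache: 'maven'                   # optional dependency caching ("maven", "gradle" or "sbt")
```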
Action ID: marketplace/amirisback/frogo-loading-indicator-view
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/frogo-loading-indicator-view
Loading indicator views for any use case, to make developing Android apps easier
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Loading-Indicator-View'
description: 'Loading indicator views for any use case, to make developing Android apps easier'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
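A hypothetical invocation of this Docker action; the ref and input value are illustrative:

```yaml
steps:
  - uses: amirisback/frogo-loading-indicator-view@master   # ref is illustrative
    with:
      myInput: 'world'
```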
Action ID: marketplace/dflook/tofu-remote-state
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-remote-state
Retrieves the root-level outputs from an OpenTofu remote state.
| Name | Required | Description |
|---|---|---|
backend_type |
Required | The name of the OpenTofu plugin used for backend state |
workspace |
Optional | OpenTofu workspace to get the outputs for Default: default |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
json_output_path |
This is the path to all the root module outputs in a JSON file. The path is relative to the Actions workspace. For example, with the OpenTofu config: ```hcl output "service_hostname" { value = "example.com" } ``` The file pointed to by this output will contain: ```json { "service_hostname": "example.com" } ``` OpenTofu list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object. |
name: tofu-remote-state
description: Retrieves the root-level outputs from an OpenTofu remote state.
author: Daniel Flook
inputs:
backend_type:
description: The name of the OpenTofu plugin used for backend state
required: true
workspace:
description: OpenTofu workspace to get the outputs for
required: false
default: "default"
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
outputs:
json_output_path:
description: |
This is the path to all the root module outputs in a JSON file.
The path is relative to the Actions workspace.
For example, with the OpenTofu config:
```hcl
output "service_hostname" {
value = "example.com"
}
```
The file pointed to by this output will contain:
```json
{
"service_hostname": "example.com"
}
```
OpenTofu list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/remote-state.sh
branding:
icon: globe
color: purple
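A sketch of how the backend inputs and the `json_output_path` output might be wired together; the version tag and backend values are hypothetical:

```yaml
steps:
  - name: Read remote state outputs
    id: remote-state
    uses: dflook/tofu-remote-state@v2            # version tag is illustrative
    with:
      backend_type: s3
      backend_config: |                          # hypothetical backend values
        bucket=my-terraform-state
        key=prod/terraform.tfstate
        region=eu-west-1
  - name: Inspect the outputs file
    run: cat "${{ steps.remote-state.outputs.json_output_path }}"
```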
Action ID: marketplace/wei/pull-request
Author: Wei He <github@weispot.com>
Publisher: wei
Repository: github.com/wei/pull-request
⤵️ Create pull request
| Name | Required | Description |
|---|---|---|
destination_repository |
Optional | Repository (user/repo) to create the pull request in, falls back to checkout repository or triggered repository |
source_branch |
Optional | Branch name to pull from, default is triggered branch |
destination_branch |
Optional | Branch name to sync to in this repo, default is master Default: master |
pr_title |
Optional | Pull request title |
pr_body |
Optional | Pull request body |
pr_template |
Optional | Pull request template |
pr_reviewer |
Optional | Pull request reviewers, comma-separated list (no spaces) |
pr_assignee |
Optional | Pull request assignees, comma-separated list (no spaces) |
pr_label |
Optional | Pull request labels, comma-separated list (no spaces) |
pr_milestone |
Optional | Pull request milestone |
pr_draft |
Optional | Draft pull request |
pr_allow_empty |
Optional | Create PR even if no changes |
working_directory |
Optional | Change working directory |
github_token |
Required | GitHub token secret Default: ${{ github.token }} |
debug |
Optional | Bash set -x debugging mode |
| Name | Description |
|---|---|
pr_url |
Pull request URL |
pr_number |
Pull request number |
pr_created |
Boolean string indicating if a pull request was created from the action run |
has_changed_files |
Boolean string indicating whether any file has been changed |
name: GitHub Pull Request Action
author: Wei He <github@weispot.com>
description: ⤵️ Create pull request
branding:
icon: 'git-pull-request'
color: 'gray-dark'
inputs:
destination_repository:
description: Repository (user/repo) to create the pull request in, falls back to checkout repository or triggered repository
required: false
source_branch:
description: Branch name to pull from, default is triggered branch
required: false
destination_branch:
description: Branch name to sync to in this repo, default is master
required: false
default: master
pr_title:
description: Pull request title
required: false
pr_body:
description: Pull request body
required: false
pr_template:
description: Pull request template
required: false
pr_reviewer:
description: Pull request reviewers, comma-separated list (no spaces)
required: false
pr_assignee:
description: Pull request assignees, comma-separated list (no spaces)
required: false
pr_label:
description: Pull request labels, comma-separated list (no spaces)
required: false
pr_milestone:
description: Pull request milestone
required: false
pr_draft:
description: Draft pull request
required: false
pr_allow_empty:
description: Create PR even if no changes
required: false
working_directory:
description: Change working directory
required: false
github_token:
description: GitHub token secret
required: true
default: ${{ github.token }}
debug:
description: Bash set -x debugging mode
required: false
outputs:
pr_url:
description: 'Pull request URL'
pr_number:
description: 'Pull request number'
pr_created:
description: 'Boolean string indicating if a pull request was created from the action run'
has_changed_files:
description: 'Boolean string indicating whether any file has been changed'
runs:
using: 'docker'
image: Dockerfile
env:
GITHUB_TOKEN: ${{ inputs.github_token }}
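A minimal sketch of a workflow step using this action; the version tag and branch names are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Create pull request
    uses: wei/pull-request@v2                  # version tag is illustrative
    id: open-pr
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      source_branch: develop                   # hypothetical branch names
      destination_branch: master
      pr_title: 'Sync develop into master'
  - run: echo "Opened ${{ steps.open-pr.outputs.pr_url }}"
```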
Action ID: marketplace/appleboy/lambda-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/lambda-action
Deploying Lambda code to an existing function
| Name | Required | Description |
|---|---|---|
aws_region |
Optional | AWS Region Default: us-east-1 |
aws_access_key_id |
Optional | AWS ACCESS KEY |
aws_secret_access_key |
Optional | AWS SECRET KEY |
aws_session_token |
Optional | AWS Session token |
aws_profile |
Optional | AWS profile |
function_name |
Optional | AWS lambda function name |
s3_bucket |
Optional | An Amazon S3 bucket in the same AWS Region as your function. The bucket can be in a different AWS account. |
s3_key |
Optional | The Amazon S3 key of the deployment package. |
s3_object_version |
Optional | AWS lambda s3 object version |
zip_file |
Optional | AWS lambda zip file |
source |
Optional | zip file list |
dry_run |
Optional | Set to true to validate the request parameters and access permissions without modifying the function code. |
debug |
Optional | Show debug message after uploading the Lambda successfully. |
publish |
Optional | Set to true to publish a new version of the function after updating the code. Default: True |
reversion_id |
Optional | Only update the function if the revision ID matches the ID that is specified. |
memory_size |
Optional | The amount of memory that your function has access to. Increasing the function memory also increases its CPU allocation. The default value is 128 MB. The value must be a multiple of 64 MB. |
timeout |
Optional | The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds. |
handler |
Optional | The name of the method within your code that Lambda calls to execute your function. |
role |
Optional | The Amazon Resource Name (ARN) of the function execution role. |
runtime |
Optional | The identifier of the function runtime. |
environment |
Optional | Lambda Environment variables. |
layers |
Optional | A list of function layers, to add to the function execution environment. Specify each layer by its ARN, including the version |
image_uri |
Optional | URI of a container image in the Amazon ECR registry. |
subnets |
Optional | Select the VPC subnets for Lambda to use to set up your VPC configuration. |
securitygroups |
Optional | Choose the VPC security groups for Lambda to use to set up your VPC configuration. |
description |
Optional | A description of the function. |
tracing_mode |
Optional | Set Mode to Active to sample and trace a subset of incoming requests with AWS X-Ray. |
max_attempts |
Optional | the maximum number of times the waiter should attempt to check the resource for the target state Default: 600 |
architectures |
Optional | The instruction set architecture that the function supports. Architecture is a string array with one of the valid values. The default architecture value is x86_64. |
ipv6_dual_stack |
Optional | Enables or disables dual-stack IPv6 support in the VPC configuration |
name: "AWS Lambda Deploy"
description: "Deploying Lambda code to an existing function"
author: "Bo-Yi Wu"
inputs:
aws_region:
description: "AWS Region"
default: "us-east-1"
aws_access_key_id:
description: "AWS ACCESS KEY"
aws_secret_access_key:
description: "AWS SECRET KEY"
aws_session_token:
description: "AWS Session token"
aws_profile:
description: "AWS profile"
function_name:
description: "AWS lambda function name"
s3_bucket:
description: "An Amazon S3 bucket in the same AWS Region as your function. The bucket can be in a different AWS account."
s3_key:
description: "The Amazon S3 key of the deployment package."
s3_object_version:
description: "AWS lambda s3 object version"
zip_file:
description: "AWS lambda zip file"
source:
description: "zip file list"
dry_run:
description: "Set to true to validate the request parameters and access permissions without modifying the function code."
debug:
description: "Show debug message after upload the lambda successfully."
publish:
description: "Set to true to publish a new version of the function after updating the code."
default: true
reversion_id:
description: "Only update the function if the revision ID matches the ID that is specified."
memory_size:
description: "The amount of memory that your function has access to. Increasing the function memory also increases its CPU allocation. The default value is 128 MB. The value must be a multiple of 64 MB."
default: 0
timeout:
description: "The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds."
default: 0
handler:
description: "The name of the method within your code that Lambda calls to execute your function."
role:
description: "The Amazon Resource Name (ARN) of the function execution role."
runtime:
description: "The identifier of the function runtime."
environment:
description: "Lambda Environment variables."
layers:
description: "A list of function layers, to add to the function execution environment. Specify each layer by its ARN, including the version"
image_uri:
description: "URI of a container image in the Amazon ECR registry."
subnets:
description: "Select the VPC subnets for Lambda to use to set up your VPC configuration."
securitygroups:
description: "Choose the VPC security groups for Lambda to use to set up your VPC configuration."
description:
description: "A description of the function."
tracing_mode:
description: "Set Mode to Active to sample and trace a subset of incoming requests with AWS X-Ray."
max_attempts:
description: "the maximum number of times the waiter should attempt to check the resource for the target state"
default: 600
architectures:
description: "The instruction set architecture that the function supports. Architecture is a string array with one of the valid values. The default architecture value is x86_64."
ipv6_dual_stack:
description: "Enables or disables dual-stack IPv6 support in the VPC configuration"
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "layers"
color: "gray-dark"
Action ID: marketplace/creyD/autopep8_action
Author: Conrad Großer <grosserconrad@gmail.com>
Publisher: creyD
Repository: github.com/creyD/autopep8_action
Automatically runs the autopep8 command on all your changes.
| Name | Required | Description |
|---|---|---|
commit_message |
Optional | Commit message Default: Adjusted files for PEP-8 compliance |
commit_options |
Optional | Commit options |
file_pattern |
Optional | File pattern used for `git add` Default: * |
checkpath |
Optional | Path autopep8 checks Default: . |
options |
Optional | Parameters for autopep8 |
dry |
Optional | Should this script apply autopep8 directly or just warn? |
no_commit |
Optional | Can be used to avoid committing the changes |
github_token |
Optional | GitHub Token or PAT token used to authenticate against a repository Default: ${{ github.token }} |
name: Autopep8 Action
description: Automatically runs the autopep8 command on all your changes.
author: Conrad Großer <grosserconrad@gmail.com>
inputs:
commit_message:
description: Commit message
required: false
default: 'Adjusted files for PEP-8 compliance'
commit_options:
description: Commit options
required: false
file_pattern:
description: File pattern used for `git add`
required: false
default: '*'
checkpath:
description: Path autopep8 checks
required: false
default: '.'
options:
description: Parameters for autopep8
required: false
default: ''
dry:
description: Should this script apply autopep8 directly or just warn?
required: false
default: false
no_commit:
description: Can be used to avoid committing the changes
required: false
default: false
github_token:
description: GitHub Token or PAT token used to authenticate against a repository
required: false
default: ${{ github.token }}
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'fast-forward'
color: 'green'
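A minimal sketch; the version tag and autopep8 options are illustrative choices:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Run autopep8 on the checkout
    uses: creyD/autopep8_action@v1             # version tag is illustrative
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      options: '--aggressive'                  # hypothetical autopep8 parameters
```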
Action ID: marketplace/github/super-linter
Author: Super-linter contributors
Publisher: github
Repository: github.com/github/super-linter
Super-linter is a ready-to-run collection of linters and code analyzers, to help validate your source code.
---
name: "Super-Linter"
author: "Super-linter contributors"
description: "Super-linter is a ready-to-run collection of linters and code analyzers, to help validate your source code."
runs:
using: "docker"
image: "docker://ghcr.io/super-linter/super-linter:v7.1.0" # x-release-please-version
branding:
icon: "check-square"
color: "white"
# You can view https://github.com/super-linter/super-linter#environment-variables
# to see a comprehensive list of all environment variables that can be passed
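A common way to run this action, based on the environment-variable configuration linked in the comment above; the ref mirrors the pinned image version and the env values are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0                           # full history helps when linting only changed files
  - name: Lint code base
    uses: super-linter/super-linter@v7.1.0     # ref is illustrative; matches the pinned image above
    env:
      VALIDATE_ALL_CODEBASE: false             # example env vars from the linked list
      DEFAULT_BRANCH: main
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```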
Action ID: marketplace/mheap/build-and-tag-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/build-and-tag-action
Properly tags your GitHub Action
| Name | Required | Description |
|---|---|---|
tag_name |
Optional | The tag to update. If the workflow event is `release`, it will use the `tag_name` from the event payload. |
name: Build and Tag
description: Properly tags your GitHub Action
runs:
using: node12
main: dist/index.js
branding:
icon: archive
color: blue
inputs:
tag_name:
description: The tag to update. If the workflow event is `release`, it will use the `tag_name` from the event payload.
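A sketch of using this action from a release workflow, where `tag_name` can be omitted and taken from the release payload; the version tag is illustrative:

```yaml
on:
  release:
    types: [published]
jobs:
  tag:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mheap/build-and-tag-action@v2    # version tag is illustrative
        with:
          tag_name: v1.2.3                     # optional; defaults to the release tag on release events
```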
Action ID: marketplace/peter-evans/link-checker
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/link-checker
An action for link checking repository Markdown and HTML files
| Name | Required | Description |
|---|---|---|
args |
Optional | Liche arguments Default: -v -r * |
name: 'Link Checker'
description: 'An action for link checking repository Markdown and HTML files'
inputs:
args:
description: 'Liche arguments'
default: '-v -r *'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'external-link'
color: 'purple'
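A minimal sketch; the version tag is illustrative and the args value simply repeats the documented default:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Check links in Markdown and HTML
    uses: peter-evans/link-checker@v1          # version tag is illustrative
    with:
      args: '-v -r *'
```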
Action ID: marketplace/actions/publish-immutable-action
Author: Unknown
Publisher: actions
Repository: github.com/actions/publish-immutable-action
Publish actions as OCI artifacts to GHCR
| Name | Required | Description |
|---|---|---|
github-token |
Optional | The GitHub actions token used to authenticate with GitHub APIs Default: ${{ github.token }} |
| Name | Description |
|---|---|
package-manifest-sha |
A sha256 hash of the package manifest |
attestation-manifest-sha |
The sha256 of the provenance attestation uploaded to GHCR. This is not present if the package is not attested, e.g. in enterprise environments. |
referrer-index-manifest-sha |
The sha256 of the referrer index uploaded to GHCR. This is not present if the package is not attested, e.g. in enterprise environments. |
name: 'Package and Publish'
description: 'Publish actions as OCI artifacts to GHCR'
# TODO: Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: 'heart'
color: 'red'
inputs:
github-token:
description: 'The GitHub actions token used to authenticate with GitHub APIs'
default: ${{ github.token }}
outputs:
package-manifest-sha:
description: 'A sha256 hash of the package manifest'
attestation-manifest-sha:
description: 'The sha256 of the provenance attestation uploaded to GHCR. This is not present if the package is not attested, e.g. in enterprise environments.'
referrer-index-manifest-sha:
description: 'The sha256 of the referrer index uploaded to GHCR. This is not present if the package is not attested, e.g. in enterprise environments.'
runs:
using: node20
main: dist/index.js
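A sketch of publishing on release; the version ref is illustrative, and the permissions shown are a common setup for pushing OCI artifacts to GHCR with attestation, not something stated in this metadata:

```yaml
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write      # assumed: needed for provenance attestation
      packages: write      # assumed: needed to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - uses: actions/publish-immutable-action@v0   # ref is illustrative
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```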
Action ID: marketplace/TryGhost/action-update-posts
Author: Unknown
Publisher: TryGhost
Repository: github.com/TryGhost/action-update-posts
Schedule updates to your Ghost posts
| Name | Required | Description |
|---|---|---|
api-url |
Required | Ghost Admin API Url |
api-key |
Required | Ghost Admin API Key |
tag |
Required | The tag to lookup to find posts to update e.g. `hash-early-access` |
field |
Required | The post field that you want to update e.g. `visibility` or `featured` |
value |
Required | The new value for the field e.g. `public` or `false` |
days |
Required | Number of days after the post was published to update the post |
name: 'Update Ghost Posts'
description: 'Schedule updates to your Ghost posts'
branding:
icon: 'cloud-lightning'
color: 'gray-dark'
inputs:
api-url:
description: 'Ghost Admin API Url'
required: true
api-key:
description: 'Ghost Admin API Key'
required: true
tag:
description: 'The tag to lookup to find posts to update e.g. `hash-early-access`'
required: true
field:
description: 'The post field that you want to update e.g. `visibility` or `featured`'
required: true
value:
description: 'The new value for the field e.g. `public` or `false`'
required: true
days:
description: 'Number of days after the post was published to update the post'
required: true
runs:
using: 'node12'
main: 'dist/index.js'
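A sketch of a scheduled update using the documented inputs; the version tag, secret names, and values are hypothetical:

```yaml
on:
  schedule:
    - cron: '0 3 * * *'
jobs:
  update-posts:
    runs-on: ubuntu-latest
    steps:
      - uses: TryGhost/action-update-posts@v1   # version tag is illustrative
        with:
          api-url: ${{ secrets.GHOST_ADMIN_API_URL }}
          api-key: ${{ secrets.GHOST_ADMIN_API_KEY }}
          tag: hash-early-access                # example from the input descriptions
          field: visibility
          value: public
          days: 14                              # hypothetical window
```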
Action ID: marketplace/atlassian/gajira-create
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-create
Create a new Jira issue
| Name | Required | Description |
|---|---|---|
project |
Required | Key of the project |
issuetype |
Required | Type of the issue to be created. Example: 'Incident' |
summary |
Required | Issue summary |
description |
Optional | Issue description |
fields |
Optional | Additional fields in JSON format |
| Name | Description |
|---|---|
issue |
Key of the newly created issue |
name: Jira Create issue
description: Create a new Jira issue
branding:
icon: 'check-square'
color: 'blue'
inputs:
project:
description: Key of the project
required: true
issuetype:
description: "Type of the issue to be created. Example: 'Incident'"
required: true
summary:
description: Issue summary
required: true
description:
description: Issue description
required: false
fields:
description: Additional fields in JSON format
required: false
outputs:
issue:
description: Key of the newly created issue
runs:
using: 'node16'
main: './dist/index.js'
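A sketch of creating an issue and reading the `issue` output; the version tag and field values are illustrative (Jira authentication is configured elsewhere and is not covered by this metadata):

```yaml
steps:
  - name: Create Jira issue
    id: create
    uses: atlassian/gajira-create@v3            # version tag is illustrative
    with:
      project: DEV                              # hypothetical project key
      issuetype: Incident
      summary: 'Build failed on ${{ github.ref }}'
      description: 'See run ${{ github.run_id }}'
  - run: echo "Created ${{ steps.create.outputs.issue }}"
```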
Action ID: marketplace/aws-actions/aws-secretsmanager-get-secrets
Author: AWS Secrets Manager
Publisher: aws-actions
Repository: github.com/aws-actions/aws-secretsmanager-get-secrets
GitHub action for retrieving secrets from AWS Secrets Manager
| Name | Required | Description |
|---|---|---|
secret-ids |
Required | One or more secret names, secret ARNs, or secret prefixes to retrieve |
parse-json-secrets |
Optional | (Optional) If true, JSON secrets will be deserialized, creating a secret environment variable for each key-value pair. Default: false |
name-transformation |
Optional | (Optional) Transforms environment variable name. Options: uppercase, lowercase, none. Default value: uppercase. Default: uppercase |
auto-select-family-attempt-timeout |
Optional | (Optional) Timeout (ms) for dual-stack DNS first IP connection attempt. Needed for geographically distant GitHub action workers Default: 1000 |
name: 'AWS Secrets Manager GitHub Action'
author: 'AWS Secrets Manager'
description: 'GitHub action for retrieving secrets from AWS Secrets Manager'
branding:
icon: 'cloud'
color: 'orange'
inputs:
secret-ids:
description: 'One or more secret names, secret ARNs, or secret prefixes to retrieve'
required: true
parse-json-secrets:
description: '(Optional) If true, JSON secrets will be deserialized, creating a secret environment variable for each key-value pair.'
required: false
default: 'false'
name-transformation:
description: '(Optional) Transforms environment variable name. Options: uppercase, lowercase, none. Default value: uppercase.'
required: false
default: 'uppercase'
auto-select-family-attempt-timeout:
description: '(Optional) Timeout (ms) for dual-stack DNS first IP connection attempt. Needed for geographically distant GitHub action workers'
required: false
default: '1000'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/cleanup/index.js'
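A sketch that assumes AWS credentials are configured in an earlier step; the version tags, role ARN, and secret names are hypothetical:

```yaml
steps:
  - uses: aws-actions/configure-aws-credentials@v4     # credentials must be available to the job
    with:
      role-to-assume: arn:aws:iam::123456789012:role/example   # hypothetical role
      aws-region: us-east-1
  - uses: aws-actions/aws-secretsmanager-get-secrets@v2        # version tag is illustrative
    with:
      secret-ids: |                                    # hypothetical secret names
        my/app/db-password
        my/app/api-key
      parse-json-secrets: 'true'
```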
Action ID: marketplace/goto-bus-stop/setup-zig
Author: goto-bus-stop
Publisher: goto-bus-stop
Repository: github.com/goto-bus-stop/setup-zig
Please use mlugg/setup-zig instead
| Name | Required | Description |
|---|---|---|
version |
Required | Version of the zig compiler to use (must be 0.3.0 or up) Default: master |
cache |
Optional | Cache downloaded compilers for faster action runs. Strongly recommended. Default: true |
name: 'Setup Zig (legacy)'
description: 'Please use mlugg/setup-zig instead'
branding:
icon: play
color: orange
author: 'goto-bus-stop'
inputs:
version:
description: 'Version of the zig compiler to use (must be 0.3.0 or up)'
required: true
default: 'master'
cache:
description: 'Cache downloaded compilers for faster action runs. Strongly recommended.'
required: false
default: 'true'
runs:
using: 'node20'
main: 'dist/index.js'
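A minimal sketch; the version tag and Zig version are illustrative (the action itself recommends mlugg/setup-zig instead):

```yaml
steps:
  - uses: goto-bus-stop/setup-zig@v2            # version tag is illustrative
    with:
      version: 0.11.0                           # any compiler version from 0.3.0 up; default is master
  - run: zig version
```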
Action ID: marketplace/azure/webapps-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/webapps-deploy
Deploy Web Apps/Containerized Web Apps to Azure. github.com/Azure/Actions
| Name | Required | Description |
|---|---|---|
app-name |
Required | Name of the Azure Web App |
publish-profile |
Optional | Applies to Web Apps(Windows and Linux) and Web App Containers(linux). Multi container scenario not supported. Publish profile (*.publishsettings) file contents with Web Deploy secrets |
slot-name |
Optional | Enter an existing Slot other than the Production slot Default: production |
package |
Optional | Applies to Web App only: Path to package or folder. *.zip, *.war, *.jar or a folder to deploy Default: . |
images |
Optional | Applies to Web App Containers only: Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For multi-container scenario multiple container image names can be provided (multi-line separated) |
configuration-file |
Optional | Applies to Web App Containers only: Path of the Docker-Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container scenario |
startup-command |
Optional | Enter the startup command. For ex. dotnet run |
resource-group-name |
Optional | Enter the resource group name of the web app |
type |
Optional | Enter deployment type (JAR, WAR, EAR, ZIP, Static) |
target-path |
Optional | Target path in the web app. For ex. '/home/site/wwwroot' |
clean |
Optional | Delete existing files target directory before deploying |
restart |
Optional | Restart the app service after deployment |
sitecontainers-config |
Optional | Applies to Sitecontainers, contains a list of siteContainer specs |
| Name | Description |
|---|---|
webapp-url |
URL to work with your webapp |
# web app action
name: 'Azure WebApp'
description: 'Deploy Web Apps/Containerized Web Apps to Azure. github.com/Azure/Actions'
inputs:
app-name:
description: 'Name of the Azure Web App'
required: true
publish-profile:
description: 'Applies to Web Apps(Windows and Linux) and Web App Containers(linux). Multi container scenario not supported. Publish profile (*.publishsettings) file contents with Web Deploy secrets'
required: false
slot-name:
description: 'Enter an existing Slot other than the Production slot'
required: false
default: 'production'
package:
description: 'Applies to Web App only: Path to package or folder. *.zip, *.war, *.jar or a folder to deploy'
required: false
default: '.'
images:
description: "Applies to Web App Containers only: Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For multi-container scenario multiple container image names can be provided (multi-line separated)"
required: false
configuration-file:
description: 'Applies to Web App Containers only: Path of the Docker-Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container scenario'
required: false
startup-command:
description: 'Enter the startup command. For ex. dotnet run'
required: false
resource-group-name:
description: 'Enter the resource group name of the web app'
required: false
type:
description: 'Enter deployment type (JAR, WAR, EAR, ZIP, Static)'
required: false
target-path:
description: "Target path in the web app. For ex. '/home/site/wwwroot'"
required: false
clean:
description: 'Delete existing files target directory before deploying'
required: false
restart:
description: 'Restart the app service after deployment'
required: false
sitecontainers-config:
description: 'Applies to Sitecontainers, contains a list of siteContainer specs'
required: false
outputs:
webapp-url:
description: 'URL to work with your webapp'
branding:
icon: 'webapp.svg'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main.js'
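A minimal deployment sketch; the version tag, app name, secret name, and package path are hypothetical:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Deploy to Azure Web App
    uses: azure/webapps-deploy@v3               # version tag is illustrative
    with:
      app-name: my-web-app                      # hypothetical app name
      publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
      package: ./dist                           # hypothetical package path
```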
Action ID: marketplace/upsidr/merge-gatekeeper
Author: Unknown
Publisher: upsidr
Repository: github.com/upsidr/merge-gatekeeper
Get better merge control
| Name | Required | Description |
|---|---|---|
token |
Required | set github token |
self |
Optional | set self job name Default: merge-gatekeeper |
interval |
Optional | set validate interval second (default 5) Default: 5 |
timeout |
Optional | set validate timeout second (default 600) Default: 600 |
ignored |
Optional | set ignored jobs (comma-separated list) |
ref |
Optional | set ref of github repository. the ref can be a SHA, a branch name, or tag name Default: ${{ github.event.pull_request.head.sha }} |
name: "Merge Gatekeeper"
description: "Get better merge control"
branding:
icon: git-merge
color: orange
inputs:
token:
description: "set github token"
required: true
self:
description: "set self job name"
required: false
default: "merge-gatekeeper"
interval:
description: "set validate interval second (default 5)"
required: false
default: "5"
timeout:
description: "set validate timeout second (default 600)"
required: false
default: "600"
ignored:
description: "set ignored jobs (comma-separated list)"
required: false
default: ""
ref:
description: "set ref of github repository. the ref can be a SHA, a branch name, or tag name"
required: false
default: ${{ github.event.pull_request.head.sha }}
runs:
using: "docker"
image: "Dockerfile"
args:
- "validate"
- "--token=${{ inputs.token }}"
- "--self=${{ inputs.self }}"
- "--interval=${{ inputs.interval }}"
- "--ref=${{ inputs.ref }}"
- "--timeout=${{ inputs.timeout }}"
- "--ignored=${{ inputs.ignored }}"
Action ID: marketplace/sobolevn/misspell-fixer-action
Author: Unknown
Publisher: sobolevn
Repository: github.com/sobolevn/misspell-fixer-action
Runs misspell-fixer as a GitHub Action
| Name | Required | Description |
|---|---|---|
options |
Optional | Any options for the misspell-fixer tool Default: -rsvn . |
| Name | Description |
|---|---|
output |
The output of misspell-fixer run |
# This is a definition file for a GitHub Action.
# See: https://help.github.com/en/articles/creating-a-docker-container-action
# We also define metadata here:
# See: https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'misspell-fixer-action'
description: 'Runs misspell-fixer as a GitHub Action'
branding:
icon: 'bookmark'
color: 'green'
inputs:
options:
description: 'Any options for the misspell-fixer tool'
required: false
default: '-rsvn .'
outputs:
output:
description: 'The output of misspell-fixer run'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.options }}
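A minimal sketch; the ref is illustrative and the options value repeats the documented default:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Fix misspellings
    uses: sobolevn/misspell-fixer-action@master # ref is illustrative
    with:
      options: '-rsvn .'
```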
Action ID: marketplace/appleboy/telegram-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/telegram-action
Sending a Telegram message
| Name | Required | Description |
|---|---|---|
to |
Optional | telegram user |
token |
Optional | telegram token |
message |
Optional | telegram message |
message_file |
Optional | overwrite the default message template with the contents of the specified file |
socks5 |
Optional | support socks5 proxy URL |
photo |
Optional | send the photo message. |
document |
Optional | send the document message. |
sticker |
Optional | send the sticker message. |
audio |
Optional | send the audio message. |
voice |
Optional | send the voice message. |
location |
Optional | send the location message. |
venue |
Optional | send the venue message. |
video |
Optional | send the video message. |
debug |
Optional | enable debug mode. |
format |
Optional | message format: markdown or html |
disable_web_page_preview |
Optional | disables link previews for links in this message |
disable_notification |
Optional | Disables notifications for this message; supports sending a message without notification |
name: 'Telegram Message Notify'
description: 'Sending a Telegram message'
author: 'Bo-Yi Wu'
inputs:
to:
description: 'telegram user'
token:
description: 'telegram token'
message:
description: 'telegram message'
message_file:
description: 'overwrite the default message template with the contents of the specified file'
socks5:
description: 'support socks5 proxy URL'
photo:
description: 'send the photo message.'
document:
description: 'send the document message.'
sticker:
description: 'send the sticker message.'
audio:
description: 'send the audio message.'
voice:
description: 'send the voice message.'
location:
description: 'send the location message.'
venue:
description: 'send the venue message.'
video:
description: 'send the video message.'
debug:
description: 'enable debug mode.'
format:
description: 'message format: markdown or html'
disable_web_page_preview:
description: 'disables link previews for links in this message'
disable_notification:
description: 'Disables notifications for this message; supports sending a message without notification'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'message-square'
color: 'blue'
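A minimal notification sketch; the ref and secret names are illustrative:

```yaml
steps:
  - name: Send Telegram notification
    uses: appleboy/telegram-action@master       # ref is illustrative
    with:
      to: ${{ secrets.TELEGRAM_TO }}            # hypothetical secret names
      token: ${{ secrets.TELEGRAM_TOKEN }}
      format: markdown
      message: 'Build finished for ${{ github.repository }} (${{ github.sha }})'
```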
Action ID: marketplace/docker/setup-qemu-action
Author: docker
Publisher: docker
Repository: github.com/docker/setup-qemu-action
Install QEMU static binaries
| Name | Required | Description |
|---|---|---|
image |
Optional | QEMU static binaries Docker image (e.g. tonistiigi/binfmt:latest) Default: docker.io/tonistiigi/binfmt:latest |
platforms |
Optional | Platforms to install (e.g. arm64,riscv64,arm) Default: all |
cache-image |
Optional | Cache binfmt image to GitHub Actions cache backend Default: true |
| Name | Description |
|---|---|
platforms |
Available platforms (comma separated) |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Docker Setup QEMU'
description: 'Install QEMU static binaries'
author: 'docker'
branding:
icon: 'anchor'
color: 'blue'
inputs:
image:
description: 'QEMU static binaries Docker image (e.g. tonistiigi/binfmt:latest)'
default: 'docker.io/tonistiigi/binfmt:latest'
required: false
platforms:
description: 'Platforms to install (e.g. arm64,riscv64,arm)'
default: 'all'
required: false
cache-image:
description: 'Cache binfmt image to GitHub Actions cache backend'
default: 'true'
required: false
outputs:
platforms:
description: 'Available platforms (comma separated)'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
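A minimal sketch showing the inputs and the `platforms` output; the version tag and platform list are illustrative:

```yaml
steps:
  - name: Set up QEMU
    id: qemu
    uses: docker/setup-qemu-action@v3           # version tag is illustrative
    with:
      platforms: arm64,riscv64
  - run: echo "Available platforms: ${{ steps.qemu.outputs.platforms }}"
```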
Action ID: marketplace/azure/aml-deploy
Author: azure/gh-aml
Publisher: azure
Repository: github.com/azure/aml-deploy
Deploy a registered model in your Azure Machine Learning Workspace with this GitHub Action
| Name | Required | Description |
|---|---|---|
azure_credentials |
Required | Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS |
model_name |
Required | Name of the model that will be deployed |
model_version |
Required | Version of the model that will be deployed |
parameters_file |
Required | JSON file including the parameters for deployment. This looks in the .ml/.azure/ directory Default: deploy.json |
| Name | Description |
|---|---|
service_scoring_uri |
Scoring URI of the webservice that was created (only provided if delete_service_after_deployment is set to False) |
service_swagger_uri |
Swagger URI of the webservice that was created (only provided if delete_service_after_deployment is set to False) |
acr_address |
The DNS name or IP address (e.g. myacr.azurecr.io) of the Azure Container Registry (ACR) (only provided if create_image is not None) |
acr_username |
The username for ACR (only provided if create_image is not None) |
acr_password |
The password for ACR (only provided if create_image is not None) |
package_location |
Full URI of the docker image (e.g. myacr.azurecr.io/azureml/azureml_*) (only provided if create_image is not None) |
profiling_details |
Dictionary of details of the model profiling result. This will only be provided, if the model profiling method is used and successfully executed. |
name: "Azure Machine Learning Deploy Action"
description: "Deploy a registered model in your Azure Machine Learning Workspace with this GitHub Action"
author: "azure/gh-aml"
inputs:
azure_credentials:
description: "Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS"
required: true
model_name:
description: "Name of the model that will be deployed"
required: true
model_version:
description: "Version of the model that will be deployed"
required: true
parameters_file:
description: "JSON file including the parameters for deployment. This looks in the .ml/.azure/ directory"
required: true
default: "deploy.json"
outputs:
service_scoring_uri:
description: "Scoring URI of the webservice that was created (only provided if delete_service_after_deployment is set to False)"
service_swagger_uri:
description: "Swagger URI of the webservice that was created (only provided if delete_service_after_deployment is set to False)"
acr_address:
description: "The DNS name or IP address (e.g. myacr.azurecr.io) of the Azure Container Registry (ACR) (only provided if create_image is not None)"
acr_username:
description: "The username for ACR (only provided if create_image is not None)"
acr_password:
description: "The password for ACR (only provided if create_image is not None)"
package_location:
description: "Full URI of the docker image (e.g. myacr.azurecr.io/azureml/azureml_*) (only provided if create_image is not None)"
profiling_details:
description: "Dictionary of details of the model profiling result. This will only be provided, if the model profiling method is used and successfully executed."
branding:
icon: "chevron-up"
color: "blue"
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/aws-actions/vulnerability-scan-github-action-for-amazon-inspector
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/vulnerability-scan-github-action-for-amazon-inspector
Generate SBOMs and scan for vulnerabilities in artifacts such as files, directories, containers, and more.
| Name | Required | Description |
|---|---|---|
artifact_type |
Required | The artifact you would like to scan with Amazon Inspector. Valid choices are "repository", "container", "binary", or "archive". Default: repository |
artifact_path |
Required | The file path to the artifact you would like to scan with Amazon Inspector. File paths are relative to the root project directory. If scanning a container image, you must provide a value that follows the docker pull convention: "NAME[:TAG|@DIGEST]", for example, "alpine:latest", or a path to an image exported as tarball using "docker save". Default: ./ |
display_vulnerability_findings |
Required | If set to "enabled", the action will display detailed vulnerability findings in the step summary page; see here for an example report: https://github.com/aws-actions/vulnerability-scan-github-action-for-amazon-inspector/actions/runs/8878213714 Default: disabled |
output_sbom_path |
Optional | The destination file path for the generated SBOM. Default: ./sbom_${{ github.run_id }}.json |
output_inspector_scan_path |
Optional | The destination file path for Inspector's vulnerability scan (JSON format). Default: inspector_scan_${{ github.run_id }}.json |
output_inspector_scan_path_csv |
Optional | The destination file path for Inspector's vulnerability scan (CSV format). Default: inspector_scan_${{ github.run_id }}.csv |
output_inspector_scan_path_markdown |
Optional | The destination file path for Inspector's vulnerability scan (markdown format). Default: inspector_scan_${{ github.run_id }}.md |
output_inspector_dockerfile_scan_path_csv |
Optional | The destination file path for Inspector's Dockerfile vulnerability scan (CSV format). Default: inspector_dockerfile_scan_${{ github.run_id }}.csv |
output_inspector_dockerfile_scan_path_markdown |
Optional | The destination file path for Inspector's Dockerfile vulnerability scan (markdown format). Default: inspector_dockerfile_scan_${{ github.run_id }}.md |
sbomgen_version |
Optional | The inspector-sbomgen version you wish to use for SBOM generation. See here for more info: https://docs.aws.amazon.com/inspector/latest/user/sbom-generator.html Default: latest |
critical_threshold |
Optional | Specifies the number of critical vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag. |
high_threshold |
Optional | Specifies the number of high vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag. |
medium_threshold |
Optional | Specifies the number of medium vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag. |
low_threshold |
Optional | Specifies the number of low vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag. |
other_threshold |
Optional | Specifies the number of other vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag. |
scanners |
Optional | Specifies the file scanners that you would like inspector-sbomgen to execute. By default, inspector-sbomgen will try to run all file scanners that are applicable to the target artifact. If this argument is set, inspector-sbomgen will only execute the specified file scanners. Provide your input as a single string. Separate each file scanner with a comma. To view a list of available file scanners, execute 'inspector-sbomgen list-scanners'. See here for more info: https://docs.aws.amazon.com/inspector/latest/user/sbom-generator.html Default: '' |
skip_scanners |
Optional | Specifies a list of file scanners that should NOT be executed; this argument cannot be combined with 'scanners'. If this argument is set, inspector-sbomgen will execute all file scanners except those you specified. Provide your input as a single string. Separate each file scanner with a comma. To view a list of available file scanners, execute 'inspector-sbomgen list-scanners'. See here for more info: https://docs.aws.amazon.com/inspector/latest/user/sbom-generator.html Default: '' |
skip_files |
Optional | Specifies one or more files and/or directories that should NOT be inventoried. Separate each file with a comma and enclose the entire string in double quotes. Default: '' |
timeout |
Optional | Specifies a timeout in seconds. If this timeout is exceeded, the action will gracefully conclude and present any findings discovered up to that point. Default: 600 |
platform |
Optional | Specifies the OS and CPU arch of the container image you wish to scan. Valid inputs are of the form 'os/cpu/variant' for example, 'linux/amd64', 'linux/arm64/v8', etc. If no platform is specified, the system will use the same platform as the host that is performing the scan. This argument only affects container image scans. Requires inspector-sbomgen 1.5.1 or later. |
threshold_fixable_only |
Optional | If set to true, only count vulnerabilities with a fix towards threshold exceeded condition. |
show_only_fixable_vulns |
Optional | If set to true, this action will show only fixed vulnerabilities in the GitHub Actions step summary page. All vulnerability metadata is still retained in the raw Inspector scan files. |
| Name | Description |
|---|---|
artifact_sbom |
The filepath to the artifact's software bill of materials. |
inspector_scan_results |
The file path to the Inspector vulnerability scan findings in JSON format. |
inspector_scan_results_csv |
The file path to the Inspector vulnerability scan findings in CSV format. |
inspector_scan_results_markdown |
The file path to the Inspector vulnerability scan findings in markdown format. |
inspector_dockerile_scan_results_csv |
The file path to the Inspector Dockerfile vulnerability scan findings in CSV format. |
inspector_dockerile_scan_results_markdown |
The file path to the Inspector Dockerfile vulnerability scan findings in markdown format. |
vulnerability_threshold_exceeded |
This variable is set to 1 if any vulnerability threshold was exceeded, otherwise it is 0. This variable can be used to trigger custom logic, such as failing the job if vulnerabilities were detected. |
name: 'Vulnerability Scan GitHub Action for Amazon Inspector'
description: 'Generate SBOMs and scan for vulnerabilities in artifacts such as files, directories, containers, and more.'
branding:
color: orange
icon: cloud
inputs:
artifact_type:
description: 'The artifact you would like to scan with Amazon Inspector. Valid choices are "repository", "container", "binary", or "archive".'
required: True
default: 'repository'
artifact_path:
description: 'The file path to the artifact you would like to scan with Amazon Inspector. File paths are relative to the root project directory. If scanning a container image, you must provide a value that follows the docker pull convention: "NAME[:TAG|@DIGEST]", for example, "alpine:latest", or a path to an image exported as tarball using "docker save".'
required: True
default: './'
display_vulnerability_findings:
description: 'If set to "enabled", the action will display detailed vulnerability findings in the step summary page; see here for an example report: https://github.com/aws-actions/vulnerability-scan-github-action-for-amazon-inspector/actions/runs/8878213714'
required: True
default: "disabled"
output_sbom_path:
description: "The destination file path for the generated SBOM."
required: False
default: './sbom_${{ github.run_id }}.json'
output_inspector_scan_path:
description: "The destination file path for Inspector's vulnerability scan (JSON format)."
required: False
default: 'inspector_scan_${{ github.run_id }}.json'
output_inspector_scan_path_csv:
description: "The destination file path for Inspector's vulnerability scan (CSV format)."
required: False
default: 'inspector_scan_${{ github.run_id }}.csv'
output_inspector_scan_path_markdown:
description: "The destination file path for Inspector's vulnerability scan (markdown format)."
required: False
default: 'inspector_scan_${{ github.run_id }}.md'
output_inspector_dockerfile_scan_path_csv:
description: "The destination file path for Inspector's Dockerfile vulnerability scan (CSV format)."
required: False
default: 'inspector_dockerfile_scan_${{ github.run_id }}.csv'
output_inspector_dockerfile_scan_path_markdown:
description: "The destination file path for Inspector's Dockerfile vulnerability scan (markdown format)."
required: False
default: 'inspector_dockerfile_scan_${{ github.run_id }}.md'
sbomgen_version:
description: "The inspector-sbomgen version you wish to use for SBOM generation. See here for more info: https://docs.aws.amazon.com/inspector/latest/user/sbom-generator.html"
required: False
default: "latest"
critical_threshold:
description: "Specifies the number of critical vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag."
required: False
default: 0
high_threshold:
description: "Specifies the number of high vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag."
required: False
default: 0
medium_threshold:
description: "Specifies the number of medium vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag."
required: False
default: 0
low_threshold:
description: "Specifies the number of low vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag."
required: False
default: 0
other_threshold:
description: "Specifies the number of other vulnerabilities needed to set the 'vulnerability_threshold_exceeded' flag."
required: False
default: 0
scanners:
description: "Specifies the file scanners that you would like inspector-sbomgen to execute. By default, inspector-sbomgen will try to run all file scanners that are applicable to the target artifact. If this argument is set, inspector-sbomgen will only execute the specified file scanners. Provide your input as a single string. Separate each file scanner with a comma. To view a list of available file scanners, execute 'inspector-sbomgen list-scanners'. See here for more info: https://docs.aws.amazon.com/inspector/latest/user/sbom-generator.html"
required: False
default: "''"
# Example:
# scanners: "dpkg,python-requirements,javascript-npm-packagelock"
skip_scanners:
description: "Specifies a list of file scanners that should NOT be executed; this argument cannot be combined with 'scanners'. If this argument is set, inspector-sbomgen will execute all file scanners except those you specified. Provide your input as a single string. Separate each file scanner with a comma. To view a list of available file scanners, execute 'inspector-sbomgen list-scanners'. See here for more info: https://docs.aws.amazon.com/inspector/latest/user/sbom-generator.html"
required: False
default: "''"
# Example:
# skip_scanners: "binaries,alpine-apk,dpkg,php"
skip_files:
description: "Specifies one or more files and/or directories that should NOT be inventoried. Separate each file with a comma and enclose the entire string in double quotes."
required: False
default: "''"
# Example:
# skip_files: "./media,/tmp/foo/,/bar/my_program"
timeout:
description: "Specifies a timeout in seconds. If this timeout is exceeded, the action will gracefully conclude and present any findings discovered up to that point."
required: False
default: 600 # 10 minutes
platform:
description: "Specifies the OS and CPU arch of the container image you wish to scan. Valid inputs are of the form 'os/cpu/variant' for example, 'linux/amd64', 'linux/arm64/v8', etc. If no platform is specified, the system will use the same platform as the host that is performing the scan. This argument only affects container image scans. Requires inspector-sbomgen 1.5.1 or later."
required: False
threshold_fixable_only:
description: 'If set to true, only count vulnerabilities with a fix towards threshold exceeded condition.'
required: False
default: false
type: boolean
show_only_fixable_vulns:
description: "If set to true, this action will show only fixed vulnerabilities in the GitHub Actions step summary page. All vulnerability metadata is still retained in the raw Inspector scan files."
required: False
default: false
type: boolean
outputs:
artifact_sbom:
description: "The filepath to the artifact's software bill of materials."
inspector_scan_results:
description: "The file path to the Inspector vulnerability scan findings in JSON format."
inspector_scan_results_csv:
description: "The file path to the Inspector vulnerability scan findings in CSV format."
inspector_scan_results_markdown:
description: "The file path to the Inspector vulnerability scan findings in markdown format."
inspector_dockerile_scan_results_csv:
description: "The file path to the Inspector Dockerfile vulnerability scan findings in CSV format."
inspector_dockerile_scan_results_markdown:
description: "The file path to the Inspector Dockerfile vulnerability scan findings in markdown format."
vulnerability_threshold_exceeded:
description: "This variable is set to 1 if any vulnerability threshold was exceeded, otherwise it is 0. This variable can be used to trigger custom logic, such as failing the job if vulnerabilities were detected."
runs:
using: 'docker'
image: 'Dockerfile'
args:
- --artifact-type=${{ inputs.artifact_type }}
- --artifact-path=${{ inputs.artifact_path }}
- --display-vuln-findings=${{ inputs.display_vulnerability_findings }}
- --out-sbom=${{ inputs.output_sbom_path}}
- --out-scan=${{ inputs.output_inspector_scan_path }}
- --out-scan-csv=${{ inputs.output_inspector_scan_path_csv }}
- --out-scan-markdown=${{ inputs.output_inspector_scan_path_markdown }}
- --out-dockerfile-scan-csv=${{ inputs.output_inspector_dockerfile_scan_path_csv }}
- --out-dockerfile-scan-md=${{ inputs.output_inspector_dockerfile_scan_path_markdown }}
- --sbomgen-version=${{ inputs.sbomgen_version }}
- --thresholds
- ${{ inputs.threshold_fixable_only == 'true' && '--threshold-fixable-only' || '--no-op' }}
- ${{ inputs.show_only_fixable_vulns == 'true' && '--show-only-fixable-vulns'|| '--no-op' }}
- --platform=${{ inputs.platform || '' }}
- --critical=${{ inputs.critical_threshold }}
- --high=${{ inputs.high_threshold }}
- --medium=${{ inputs.medium_threshold }}
- --low=${{ inputs.low_threshold }}
- --other=${{ inputs.other_threshold }}
- --scanners=${{ inputs.scanners }}
- --skip-scanners=${{ inputs.skip_scanners }}
- --skip-files=${{ inputs.skip_files }}
- --timeout=${{ inputs.timeout }}
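A sketch of a repository scan that fails the job when a threshold is exceeded; the version tags, role ARN, and threshold value are hypothetical, and AWS credentials are assumed to be configured by the preceding step:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: aws-actions/configure-aws-credentials@v4   # hypothetical credentials setup
    with:
      role-to-assume: arn:aws:iam::123456789012:role/example
      aws-region: us-east-1
  - name: Scan repository with Amazon Inspector
    id: inspector
    uses: aws-actions/vulnerability-scan-github-action-for-amazon-inspector@v1   # version tag is illustrative
    with:
      artifact_type: repository
      artifact_path: ./
      display_vulnerability_findings: enabled
      critical_threshold: 1                          # hypothetical threshold
  - name: Fail if any threshold was exceeded
    if: steps.inspector.outputs.vulnerability_threshold_exceeded == 1
    run: exit 1
```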
Action ID: marketplace/docker/build-push-action
Author: docker
Publisher: docker
Repository: github.com/docker/build-push-action
Build and push Docker images with Buildx
| Name | Required | Description |
|---|---|---|
add-hosts |
Optional | List of custom host-to-IP mappings (e.g., docker:10.180.0.1) |
allow |
Optional | List of extra privileged entitlement (e.g., network.host,security.insecure) |
annotations |
Optional | List of annotation to set to the image |
attests |
Optional | List of attestation parameters (e.g., type=sbom,generator=image) |
build-args |
Optional | List of build-time variables |
build-contexts |
Optional | List of additional build contexts (e.g., name=path) |
builder |
Optional | Builder instance |
cache-from |
Optional | List of external cache sources for buildx (e.g., user/app:cache, type=local,src=path/to/dir) |
cache-to |
Optional | List of cache export destinations for buildx (e.g., user/app:cache, type=local,dest=path/to/dir) |
call |
Optional | Set method for evaluating build (e.g., check) |
cgroup-parent |
Optional | Optional parent cgroup for the container used in the build |
context |
Optional | Build's context is the set of files located in the specified PATH or URL |
file |
Optional | Path to the Dockerfile |
labels |
Optional | List of metadata for an image |
load |
Optional | Load is a shorthand for --output=type=docker Default: false |
network |
Optional | Set the networking mode for the RUN instructions during build |
no-cache |
Optional | Do not use cache when building the image Default: false |
no-cache-filters |
Optional | Do not cache specified stages |
outputs |
Optional | List of output destinations (format: type=local,dest=path) |
platforms |
Optional | List of target platforms for build |
provenance |
Optional | Generate provenance attestation for the build (shorthand for --attest=type=provenance) |
pull |
Optional | Always attempt to pull all referenced images Default: false |
push |
Optional | Push is a shorthand for --output=type=registry Default: false |
sbom |
Optional | Generate SBOM attestation for the build (shorthand for --attest=type=sbom) |
secrets |
Optional | List of secrets to expose to the build (e.g., key=string, GIT_AUTH_TOKEN=mytoken) |
secret-envs |
Optional | List of secret env vars to expose to the build (e.g., key=envname, MY_SECRET=MY_ENV_VAR) |
secret-files |
Optional | List of secret files to expose to the build (e.g., key=filename, MY_SECRET=./secret.txt) |
shm-size |
Optional | Size of /dev/shm (e.g., 2g) |
ssh |
Optional | List of SSH agent socket or keys to expose to the build |
tags |
Optional | List of tags |
target |
Optional | Sets the target stage to build |
ulimit |
Optional | Ulimit options (e.g., nofile=1024:1024) |
github-token |
Optional | GitHub Token used to authenticate against a repository for Git context Default: ${{ github.token }} |
| Name | Description |
|---|---|
imageid |
Image ID |
digest |
Image digest |
metadata |
Build result metadata |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: Build and push Docker images
description: Build and push Docker images with Buildx
author: docker
branding:
icon: 'anchor'
color: 'blue'
inputs:
add-hosts:
description: "List of a customs host-to-IP mapping (e.g., docker:10.180.0.1)"
required: false
allow:
description: "List of extra privileged entitlement (e.g., network.host,security.insecure)"
required: false
annotations:
description: "List of annotation to set to the image"
required: false
attests:
description: "List of attestation parameters (e.g., type=sbom,generator=image)"
required: false
build-args:
description: "List of build-time variables"
required: false
build-contexts:
description: "List of additional build contexts (e.g., name=path)"
required: false
builder:
description: "Builder instance"
required: false
cache-from:
description: "List of external cache sources for buildx (e.g., user/app:cache, type=local,src=path/to/dir)"
required: false
cache-to:
description: "List of cache export destinations for buildx (e.g., user/app:cache, type=local,dest=path/to/dir)"
required: false
call:
description: "Set method for evaluating build (e.g., check)"
required: false
cgroup-parent:
description: "Optional parent cgroup for the container used in the build"
required: false
context:
description: "Build's context is the set of files located in the specified PATH or URL"
required: false
file:
description: "Path to the Dockerfile"
required: false
labels:
description: "List of metadata for an image"
required: false
load:
description: "Load is a shorthand for --output=type=docker"
required: false
default: 'false'
network:
description: "Set the networking mode for the RUN instructions during build"
required: false
no-cache:
description: "Do not use cache when building the image"
required: false
default: 'false'
no-cache-filters:
description: "Do not cache specified stages"
required: false
outputs:
description: "List of output destinations (format: type=local,dest=path)"
required: false
platforms:
description: "List of target platforms for build"
required: false
provenance:
description: "Generate provenance attestation for the build (shorthand for --attest=type=provenance)"
required: false
pull:
description: "Always attempt to pull all referenced images"
required: false
default: 'false'
push:
description: "Push is a shorthand for --output=type=registry"
required: false
default: 'false'
sbom:
description: "Generate SBOM attestation for the build (shorthand for --attest=type=sbom)"
required: false
secrets:
description: "List of secrets to expose to the build (e.g., key=string, GIT_AUTH_TOKEN=mytoken)"
required: false
secret-envs:
description: "List of secret env vars to expose to the build (e.g., key=envname, MY_SECRET=MY_ENV_VAR)"
required: false
secret-files:
description: "List of secret files to expose to the build (e.g., key=filename, MY_SECRET=./secret.txt)"
required: false
shm-size:
description: "Size of /dev/shm (e.g., 2g)"
required: false
ssh:
description: "List of SSH agent socket or keys to expose to the build"
required: false
tags:
description: "List of tags"
required: false
target:
description: "Sets the target stage to build"
required: false
ulimit:
description: "Ulimit options (e.g., nofile=1024:1024)"
required: false
github-token:
description: "GitHub Token used to authenticate against a repository for Git context"
default: ${{ github.token }}
required: false
outputs:
imageid:
description: 'Image ID'
digest:
description: 'Image digest'
metadata:
description: 'Build result metadata'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
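A minimal usage sketch for this action, assuming a buildx builder and registry login are set up by the companion docker/setup-buildx-action and docker/login-action steps; the version tags, image tag, and secret names below are illustrative assumptions, not taken from the metadata above.
```yaml
# Hypothetical workflow step sequence; versions, tags, and secrets are assumptions.
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3        # assumed companion action (builder instance)
      - uses: docker/login-action@v3               # assumed companion action (registry auth)
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # hypothetical secret
          password: ${{ secrets.DOCKERHUB_TOKEN }}      # hypothetical secret
      - uses: docker/build-push-action@v6          # version tag assumed
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: user/app:latest                    # hypothetical image tag
```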
Action ID: marketplace/google-github-actions/upload-cloud-storage
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/upload-cloud-storage
Upload files or folders to GCS buckets
| Name | Required | Description |
|---|---|---|
project_id |
Optional | Google Cloud project ID to use for billing and API requests. If not provided, the project will be inferred from the environment, best-effort. To explicitly set the value: ```yaml project_id: 'my-project' ``` |
universe |
Optional | The Google Cloud universe to use for constructing API endpoints. Trusted
Partner Cloud and Google Distributed Hosted Cloud should set this to their
universe address.
You can also override individual API endpoints by setting the environment
variable `GHA_ENDPOINT_OVERRIDE_<endpoint>` where `<endpoint>` is the API
endpoint to override. For example:
```yaml
env:
GHA_ENDPOINT_OVERRIDE_oauth2: 'https://oauth2.myapi.endpoint/v1'
```
For more information about universes, see the Google Cloud documentation. Default: googleapis.com |
path |
Required | The path to a file or folder inside the action's filesystem that should be uploaded to the bucket. You can specify either the absolute path or the relative path from the action: ```yaml path: '/path/to/file' ``` ```yaml path: '../path/to/file' ``` |
destination |
Required | The destination for the file/folder in the form bucket-name or with an optional prefix in the form `bucket-name/prefix`. For example, to upload a file named `file` to the GCS bucket `bucket-name`: ```yaml destination: 'bucket-name' ``` To upload to a subfolder: ```yaml destination: 'bucket-name/prefix' ``` |
gzip |
Optional | Upload file(s) with gzip content encoding. To disable gzip
content-encoding, set the value to false:
```yaml
gzip: false
``` Default: True |
resumable |
Optional | Enable resumable uploads. To disable resumable uploads, set the value to
false:
```yaml
resumable: false
``` Default: True |
predefinedAcl |
Optional | Apply a predefined set of access controls to the files being uploaded. For example, to grant project team members access to the uploaded files according to their roles: ```yaml predefinedAcl: 'projectPrivate' ``` Acceptable values are: `authenticatedRead`, `bucketOwnerFullControl`, `bucketOwnerRead`, `private`, `projectPrivate`, `publicRead`. See [the document](https://googleapis.dev/nodejs/storage/latest/global.html#UploadOptions) for details. |
headers |
Optional | Set object metadata. For example, to set the `Content-Type` header to `application/json` and custom metadata with key `custom-field` and value `custom-value`: ```yaml headers: |- content-type: 'application/json' x-goog-meta-custom-field: 'custom-value' ``` Settable fields are: `cache-control`, `content-disposition`, `content-encoding`, `content-language`, `content-type`, `custom-time`. See [the document](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata#settable-fields;-field-values) for details. All custom metadata fields must be prefixed with `x-goog-meta-`. |
parent |
Optional | Whether the parent directory should be included in GCS destination path. To disable this:
```yaml
parent: false
``` Default: True |
glob |
Optional | Glob pattern to match for files to upload. ```yaml glob: '*.txt' ``` |
concurrency |
Optional | Number of files to simultaneously upload.
```yaml
concurrency: '10'
``` Default: 100 |
gcloudignore_path |
Optional | Path to a gcloudignore file within the repository.
```yaml
gcloudignore_path: '.gcloudignore.dev'
``` Default: .gcloudignore |
process_gcloudignore |
Optional | Process a `.gcloudignore` file present in the top-level of the repository.
If true, the file is parsed and any filepaths that match are not uploaded
to the storage bucket. To disable, set the value to false:
```yaml
process_gcloudignore: false
``` Default: True |
| Name | Description |
|---|---|
uploaded |
Comma-separated list of files that were uploaded. |
name: 'Cloud Storage Uploader'
description: 'Upload files or folders to GCS buckets'
author: 'Google LLC'
inputs:
#
# Google Cloud
# ------------
project_id:
description: |-
Google Cloud project ID to use for billing and API requests. If not
provided, the project will be inferred from the environment, best-effort.
To explicitly set the value:
```yaml
project_id: 'my-project'
```
required: false
universe:
description: |-
The Google Cloud universe to use for constructing API endpoints. Trusted
Partner Cloud and Google Distributed Hosted Cloud should set this to their
universe address.
You can also override individual API endpoints by setting the environment
variable `GHA_ENDPOINT_OVERRIDE_<endpoint>` where `<endpoint>` is the API
endpoint to override. For example:
```yaml
env:
GHA_ENDPOINT_OVERRIDE_oauth2: 'https://oauth2.myapi.endpoint/v1'
```
For more information about universes, see the Google Cloud documentation.
default: 'googleapis.com'
required: false
#
# GCS
# ------------
path:
description: |-
The path to a file or folder inside the action's filesystem that should be
uploaded to the bucket.
You can specify either the absolute path or the relative path from the
action:
```yaml
path: '/path/to/file'
```
```yaml
path: '../path/to/file'
```
required: true
destination:
description: |-
The destination for the file/folder in the form bucket-name or with an
optional prefix in the form `bucket-name/prefix`. For example, to upload a
file named `file` to the GCS bucket `bucket-name`:
```yaml
destination: 'bucket-name'
```
To upload to a subfolder:
```yaml
destination: 'bucket-name/prefix'
```
required: true
gzip:
description: |-
Upload file(s) with gzip content encoding. To disable gzip
content-encoding, set the value to false:
```yaml
gzip: false
```
required: false
default: true
resumable:
description: |-
Enable resumable uploads. To disable resumable uploads, set the value to
false:
```yaml
resumable: false
```
required: false
default: true
predefinedAcl:
description: |-
Apply a predefined set of access controls to the files being uploaded. For
example, to grant project team members access to the uploaded files
according to their roles:
```yaml
predefinedAcl: 'projectPrivate'
```
Acceptable values are: `authenticatedRead`, `bucketOwnerFullControl`,
`bucketOwnerRead`, `private`, `projectPrivate`, `publicRead`. See [the
document](https://googleapis.dev/nodejs/storage/latest/global.html#UploadOptions)
for details.
required: false
headers:
description: |-
Set object metadata. For example, to set the `Content-Type` header to
`application/json` and custom metadata with key `custom-field` and value
`custom-value`:
```yaml
headers: |-
content-type: 'application/json'
x-goog-meta-custom-field: 'custom-value'
```
Settable fields are: `cache-control`, `content-disposition`,
`content-encoding`, `content-language`, `content-type`, `custom-time`. See
[the
document](https://cloud.google.com/storage/docs/gsutil/addlhelp/WorkingWithObjectMetadata#settable-fields;-field-values)
for details. All custom metadata fields must be prefixed with
`x-goog-meta-`.
required: false
parent:
description: |-
Whether the parent directory should be included in GCS destination path. To disable this:
```yaml
parent: false
```
required: false
default: true
glob:
description: |-
Glob pattern to match for files to upload.
```yaml
glob: '*.txt'
```
required: false
concurrency:
description: |-
Number of files to simultaneously upload.
```yaml
concurrency: '10'
```
required: false
default: '100'
gcloudignore_path:
description: |-
Path to a gcloudignore file within the repository.
```yaml
gcloudignore_path: '.gcloudignore.dev'
```
required: false
default: '.gcloudignore'
process_gcloudignore:
description: |-
Process a `.gcloudignore` file present in the top-level of the repository.
If true, the file is parsed and any filepaths that match are not uploaded
to the storage bucket. To disable, set the value to false:
```yaml
process_gcloudignore: false
```
required: false
default: true
outputs:
uploaded:
description: |-
Comma-separated list of files that were uploaded.
branding:
icon: 'upload-cloud'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
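An illustrative sketch of uploading a build output folder; the authentication step, workload identity values, and bucket name are assumptions, not part of the metadata above.
```yaml
# Hypothetical usage; auth provider, service account, and bucket are placeholders.
jobs:
  upload:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2        # assumed companion action for credentials
        with:
          workload_identity_provider: 'projects/123/locations/global/workloadIdentityPools/my-pool/providers/my-provider'  # hypothetical
          service_account: 'my-sa@my-project.iam.gserviceaccount.com'                                                      # hypothetical
      - id: upload
        uses: google-github-actions/upload-cloud-storage@v2   # version tag assumed
        with:
          path: './dist'
          destination: 'my-bucket/releases'        # hypothetical bucket/prefix
          process_gcloudignore: false
      - run: echo "Uploaded files: ${{ steps.upload.outputs.uploaded }}"
```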
Action ID: marketplace/azure/aci-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/aci-deploy
Deploy containers to Azure Container Instances. github.com/Azure/Actions
| Name | Required | Description |
|---|---|---|
resource-group |
Required | Name of the Resource Group in which the Container Instance will be created |
azure-file-volume-account-key |
Optional | The storage account access key used to access the Azure File Share |
azure-file-volume-account-name |
Optional | The name of the storage account that contains the Azure File Share |
azure-file-volume-mount-path |
Optional | The path within the container where the Azure File Volume should be mounted. Must not contain ":" |
azure-file-volume-share-name |
Optional | The name of the Azure File Share to be mounted as a volume |
azure-file-volume-read-only |
Optional | Should the Azure File Share be Mounted as Read Only. Accepted { true, false } |
command-line |
Optional | The command line to run when the container is started, e.g. "/bin/bash -c myscript.sh" |
cpu |
Optional | Number of CPU Cores Required Default: 1 |
dns-name-label |
Required | The DNS Name Label for Container with Public IP |
environment-variables |
Optional | List of environment variables for the container. Space-separated in "key=value" format |
gitrepo-dir |
Optional | The target directory path in the git repository. Must not contain ".." |
gitrepo-mount-path |
Optional | The path within the container where the git repo volume should be mounted. Must not contain ":" |
gitrepo-revision |
Optional | The commit hash for the specified revision |
gitrepo-url |
Optional | The URL of a git repository to be mounted as a volume |
gpu-count |
Optional | The Number of GPU Resources needed in the Container |
gpu-sku |
Optional | The SKU for the GPUs specified. Accepted Values are { K80, P100, V100 } |
image |
Required | Specify the fully qualified container image name. For example, "myregistry.azurecr.io/nginx:latest" or "python:3.7.2-alpine" |
ip-address |
Optional | IP Address type of the Container Group. Accepted Values are { Private, Public }. Currently it only supports { Public } Default: Public |
location |
Required | Location where the Container will be deployed |
log-analytics-workspace |
Optional | The Log Analytics Workspace Name or Id |
log-analytics-workspace-key |
Optional | The Log Analytics Workspace Key |
log-type |
Optional | The Log type to be used. Accepted Values are { ContainerInsights, ContainerInstanceLogs } |
memory |
Optional | Required Memory of the Containers in GB, accurate to one decimal place Default: 1.5 |
name |
Required | Name of the Container Group Instance |
os-type |
Optional | The OS type of the Containers. Accepted Values are { Linux, Windows } Default: Linux |
ports |
Optional | The Ports to Open on the Container. Space separate the ports for multiple values Default: 80 |
protocol |
Optional | The Network protocol to use. Accepted Values are { TCP, UDP } Default: TCP |
registry-login-server |
Optional | The container image registry login server |
registry-username |
Optional | Username to log in Container Image Registry Server |
registry-password |
Optional | Password to log in Container Image Registry Server |
restart-policy |
Optional | Restart Policy for the container(s). Accepted Values are { Always, OnFailure, Never } Default: Always |
secure-environment-variables |
Optional | List of secure environment variables for the container. Space separated values in "key=value" format |
| Name | Description |
|---|---|
app-url |
URL of the Deployed Application |
name: 'Deploy to Azure Container Instances'
description: 'Deploy containers to Azure Container Instances. github.com/Azure/Actions'
inputs:
resource-group:
description: 'Name of the Resource Group in which the Container Instance will be created'
required: true
azure-file-volume-account-key:
description: 'The storage account access key used to access the Azure File Share'
required: false
default: ''
azure-file-volume-account-name:
description: 'The name of the storage account that contains the Azure File Share'
required: false
default: ''
azure-file-volume-mount-path:
description: 'The path within the container where the Azure File Volume should be mounted. Must not contain ":"'
required: false
default: ''
azure-file-volume-share-name:
description: 'The name of the Azure File Share to be mounted as a volume'
required: false
default: ''
azure-file-volume-read-only:
description: 'Should the Azure File Share be Mounted as Read Only. Accepted { true, false }'
required: false
default: ''
command-line:
description: 'The command line to run when the container is started, e.g. "/bin/bash -c myscript.sh"'
required: false
default: ''
cpu:
description: 'Number of CPU Cores Required'
required: false
default: 1
dns-name-label:
description: 'The DNS Name Label for Container with Public IP'
required: true
environment-variables:
description: 'List of environment variables for the container. Space-separated in "key=value" format'
required: false
default: ''
gitrepo-dir:
description: 'The target directory path in the git repository. Must not contain ".."'
required: false
default: ''
gitrepo-mount-path:
description: 'The path within the container where the git repo volume should be mounted. Must not contain ":"'
required: false
default: ''
gitrepo-revision:
description: 'The commit hash for the specified revision'
required: false
default: ''
gitrepo-url:
description: 'The URL of a git repository to be mounted as a volume'
required: false
default: ''
gpu-count:
description: 'The Number of GPU Resources needed in the Container'
required: false
default: ''
gpu-sku:
description: 'The SKU for the GPUs specified. Accepted Values are { K80, P100, V100 }'
default: ''
image:
description: 'Specify the fully qualified container image name. For example, "myregistry.azurecr.io/nginx:latest" or "python:3.7.2-alpine"'
required: true
ip-address:
description: 'IP Address type of the Container Group. Accepted Values are { Private, Public }. Currently it only supports { Public }'
required: false
default: 'Public'
location:
description: 'Location where the Container will be deployed'
required: true
log-analytics-workspace:
description: 'The Log Analytics Workspace Name or Id'
required: false
log-analytics-workspace-key:
description: 'The Log Analytics Workspace Key'
required: false
log-type:
description: 'The Log type to be used. Accepted Values are { ContainerInsights, ContainerInstanceLogs }'
required: false
memory:
description: 'Required Memory of the Containers in GB, accurate to one decimal place'
required: false
default: 1.5
name:
description: 'Name of the Container Group Instance'
required: true
os-type:
description: 'The OS type of the Containers. Accepted Values are { Linux, Windows }'
required: false
default: 'Linux'
ports:
description: 'The Ports to Open on the Container. Space separate the ports for multiple values'
required: false
default: '80'
protocol:
description: 'The Network protocol to use. Accepted Values are { TCP, UDP }'
required: false
default: 'TCP'
registry-login-server:
description: 'The container image registry login server'
required: false
default: ''
registry-username:
description: 'Username to log in Container Image Registry Server'
required: false
default: ''
registry-password:
description: 'Password to log in Container Image Registry Server'
required: false
default: ''
restart-policy:
description: 'Restart Policy for the container(s). Accepted Values are { Always, OnFailure, Never }'
required: false
default: 'Always'
secure-environment-variables:
description: 'List of secure environment variables for the container. Space separated values in "key=value" format'
default: ''
outputs:
app-url:
description: 'URL of the Deployed Application'
branding:
icon: 'azure-logo.svg'
color: 'blue'
runs:
using: 'node12'
main: 'lib/main.js'
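An illustrative deployment sketch; the azure/login step, resource names, and secret names are assumptions added for context, not taken from the metadata above.
```yaml
# Hypothetical usage; credentials, registry, and group/location values are placeholders.
steps:
  - uses: azure/login@v1                           # assumed companion action for Azure auth
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}      # hypothetical secret
  - uses: azure/aci-deploy@v1                      # version tag assumed
    with:
      resource-group: my-rg                        # hypothetical resource group
      name: my-container-group
      location: westus
      dns-name-label: my-app-${{ github.run_id }}
      image: myregistry.azurecr.io/nginx:latest
      registry-login-server: myregistry.azurecr.io
      registry-username: ${{ secrets.REGISTRY_USERNAME }}   # hypothetical secret
      registry-password: ${{ secrets.REGISTRY_PASSWORD }}   # hypothetical secret
      ports: '80 443'
      cpu: 2
      memory: 1.5
```
The `app-url` output can then be read from the step in a later step if the step is given an `id`.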
Action ID: marketplace/actions/attest
Author: GitHub
Publisher: actions
Repository: github.com/actions/attest
Generate attestations for build artifacts
| Name | Required | Description |
|---|---|---|
subject-path |
Optional | Path to the artifact serving as the subject of the attestation. Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". May contain a glob pattern or list of paths (total subject count cannot exceed 1024). |
subject-digest |
Optional | Digest of the subject for the attestation. Must be in the form "algorithm:hex_digest" (e.g. "sha256:abc123..."). Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". |
subject-name |
Optional | Subject name as it should appear in the attestation. Required when identifying the subject with the "subject-digest" input. |
subject-checksums |
Optional | Path to checksums file containing digest and name of subjects for attestation. Must specify exactly one of "subject-path", "subject-digest", or "subject-checksums". |
predicate-type |
Required | URI identifying the type of the predicate. |
predicate |
Optional | String containing the value for the attestation predicate. String length cannot exceed 16MB. Must supply exactly one of "predicate-path" or "predicate". |
predicate-path |
Optional | Path to the file which contains the content for the attestation predicate. File size cannot exceed 16MB. Must supply exactly one of "predicate-path" or "predicate". |
push-to-registry |
Optional | Whether to push the attestation to the image registry. Requires that the "subject-name" parameter specify the fully-qualified image name and that the "subject-digest" parameter be specified. Defaults to false. |
show-summary |
Optional | Whether to attach a list of generated attestations to the workflow run summary page. Defaults to true.
Default: True |
github-token |
Optional | The GitHub token used to make authenticated API requests.
Default: ${{ github.token }} |
| Name | Description |
|---|---|
bundle-path |
The path to the file containing the attestation bundle. |
attestation-id |
The ID of the attestation. |
attestation-url |
The URL for the attestation summary. |
name: 'Generate Generic Attestations'
description: 'Generate attestations for build artifacts'
author: 'GitHub'
branding:
color: 'blue'
icon: 'link'
inputs:
subject-path:
description: >
Path to the artifact serving as the subject of the attestation. Must
specify exactly one of "subject-path", "subject-digest", or
"subject-checksums". May contain a glob pattern or list of paths (total
subject count cannot exceed 1024).
required: false
subject-digest:
description: >
Digest of the subject for the attestation. Must be in the form
"algorithm:hex_digest" (e.g. "sha256:abc123..."). Must specify exactly one
of "subject-path", "subject-digest", or "subject-checksums".
required: false
subject-name:
description: >
Subject name as it should appear in the attestation. Required when
identifying the subject with the "subject-digest" input.
required: false
subject-checksums:
description: >
Path to checksums file containing digest and name of subjects for
attestation. Must specify exactly one of "subject-path", "subject-digest",
or "subject-checksums".
required: false
predicate-type:
description: >
URI identifying the type of the predicate.
required: true
predicate:
description: >
String containing the value for the attestation predicate. String length
cannot exceed 16MB. Must supply exactly one of "predicate-path" or
"predicate".
required: false
predicate-path:
description: >
Path to the file which contains the content for the attestation predicate.
File size cannot exceed 16MB. Must supply exactly one of "predicate-path"
or "predicate".
required: false
push-to-registry:
description: >
Whether to push the attestation to the image registry. Requires that the
"subject-name" parameter specify the fully-qualified image name and that
the "subject-digest" parameter be specified. Defaults to false.
default: false
required: false
show-summary:
description: >
Whether to attach a list of generated attestations to the workflow run
summary page. Defaults to true.
default: true
required: false
github-token:
description: >
The GitHub token used to make authenticated API requests.
default: ${{ github.token }}
required: false
outputs:
bundle-path:
description: 'The path to the file containing the attestation bundle.'
attestation-id:
description: 'The ID of the attestation.'
attestation-url:
description: 'The URL for the attestation summary.'
runs:
using: node24
main: ./dist/index.js
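A minimal attestation sketch, assuming an artifact path and a predicate type URI chosen for illustration; the permissions block reflects what attestation workflows typically need and is an assumption here.
```yaml
# Hypothetical usage; artifact path and predicate URI are placeholders.
permissions:
  id-token: write
  attestations: write
  contents: read
steps:
  - id: attest
    uses: actions/attest@v2                                  # version tag assumed
    with:
      subject-path: 'dist/my-artifact.tar.gz'                # hypothetical artifact
      predicate-type: 'https://example.com/predicate/v1'     # hypothetical predicate type URI
      predicate: '{"builder": "my-workflow"}'                # hypothetical predicate body
  - run: echo "Attestation bundle written to ${{ steps.attest.outputs.bundle-path }}"
```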
Action ID: marketplace/google-github-actions/ssh-compute
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/ssh-compute
Use this action to connect to Google Cloud Platform instances via ssh and execute user's commands.
| Name | Required | Description |
|---|---|---|
instance_name |
Required | Name of the virtual machine instance to SSH into. |
zone |
Required | Zone of the instance to connect to. |
user |
Optional | Specifies the username with which to SSH. If omitted, the user login name is used. If using OS Login, USER will be replaced by the OS Login user. |
ssh_private_key |
Required | SSH private key with which to SSH. |
ssh_keys_dir |
Optional | Path for a directory to store ssh keys. Random directory in the temp folder will be generated by default. |
container |
Optional | The name or ID of a container inside of the virtual machine instance to connect to. This only applies to virtual machines that are using a Google Container-Optimized virtual machine image. |
ssh_args |
Optional | Additional flags to be passed to ssh tool. Example: '-vvv -L 80:%INSTANCE%:80'. |
command |
Optional | A command to run on the virtual machine. Action runs the command on the target instance and then exits. You must specify at least command or script, specifying both command and script is invalid. |
script |
Optional | A path for a bash script to run on the virtual machine. Action runs the file on the target instance and then exits. You must specify at least command or script, specifying both command and script is invalid. |
project_id |
Optional | The GCP project ID. Overrides project ID set by credentials. |
flags |
Optional | Space separated list of other compute ssh flags, examples can be found: https://cloud.google.com/sdk/gcloud/reference/compute/ssh/#FLAGS. Ex --ssh-key-expiration=2017-08-29T18:52:51.142Z. |
gcloud_version |
Optional | Version of the Cloud SDK to install. If unspecified or set to "latest", the latest available gcloud SDK version for the target platform will be installed. Example: "290.0.1". |
gcloud_component |
Optional | Version of the Cloud SDK components to install and use. If unspecified, the latest or released version will be used. This is the equivalent of running 'gcloud alpha run' or 'gcloud beta run'. Valid values are `alpha` or `beta`. |
| Name | Description |
|---|---|
stdout |
Stdout from ssh command. |
stderr |
Stderr from ssh command. |
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'SSH to Google Cloud Platform compute instances'
author: 'Google LLC'
description: |-
Use this action to connect to Google Cloud Platform instances via ssh and
execute user's commands.
inputs:
instance_name:
description: |-
Name of the virtual machine instance to SSH into.
required: true
zone:
description: |-
Zone of the instance to connect to.
required: true
user:
description: |-
Specifies the username with which to SSH.
If omitted, the user login name is used. If using OS Login, USER will be replaced by the OS Login user.
required: false
ssh_private_key:
description: |-
SSH private key with which to SSH.
required: true
ssh_keys_dir:
description: |-
Path for a directory to store ssh keys. Random directory in the temp folder will be generated by default.
required: false
container:
description: |-
The name or ID of a container inside of the virtual machine instance to connect to.
This only applies to virtual machines that are using a Google Container-Optimized
virtual machine image.
required: false
ssh_args:
description: |-
Additional flags to be passed to ssh tool. Example: '-vvv -L 80:%INSTANCE%:80'.
required: false
command:
description: |-
A command to run on the virtual machine.
Action runs the command on the target instance and then exits.
You must specify at least command or script, specifying both command and script is invalid.
required: false
script:
description: |-
A path for a bash script to run on the virtual machine.
Action runs the file on the target instance and then exits.
You must specify at least command or script, specifying both command and script is invalid.
required: false
project_id:
description: |-
The GCP project ID. Overrides project ID set by credentials.
required: false
flags:
description: |-
Space separated list of other compute ssh flags, examples can be found:
https://cloud.google.com/sdk/gcloud/reference/compute/ssh/#FLAGS. Ex
--ssh-key-expiration=2017-08-29T18:52:51.142Z.
required: false
gcloud_version:
description: |-
Version of the Cloud SDK to install. If unspecified or set to "latest",
the latest available gcloud SDK version for the target platform will be
installed. Example: "290.0.1".
required: false
gcloud_component:
description: |-
Version of the Cloud SDK components to install and use. If unspecified, the latest
or released version will be used. This is the equivalent of running
'gcloud alpha run' or 'gcloud beta run'. Valid values are `alpha` or `beta`.
required: false
outputs:
stdout:
description: |-
Stdout from ssh command.
stderr:
description: |-
Stderr from ssh command.
branding:
icon: 'terminal'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
post: 'dist/post/index.js'
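An illustrative sketch of running a single command over SSH; the credential step, instance details, and secret names are assumptions.
```yaml
# Hypothetical usage; instance name, zone, and secrets are placeholders.
steps:
  - uses: google-github-actions/auth@v2            # assumed companion action for credentials
    with:
      credentials_json: ${{ secrets.GCP_SA_KEY }}  # hypothetical secret
  - id: ssh
    uses: google-github-actions/ssh-compute@v1     # version tag assumed
    with:
      instance_name: 'my-instance'                 # hypothetical instance
      zone: 'us-central1-a'
      ssh_private_key: ${{ secrets.GCP_SSH_PRIVATE_KEY }}   # hypothetical secret
      command: 'echo Hello from the VM'
  - run: echo "${{ steps.ssh.outputs.stdout }}"
```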
Action ID: marketplace/amirisback/nf-testcase-app-consume-library
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/nf-testcase-app-consume-library
Final Project for Bachelor Degree, Full and Clear Documentation, Have Empty View :books:
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Nutrition Framework'
description: 'Final Project for Bachelor Degree, Full and Clear Documentation, Have Empty View :books:'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/appleboy/facebook-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/facebook-action
Sending a Facebook message
| Name | Required | Description |
|---|---|---|
fb_page_token |
Optional | Token is the access token of the Facebook page to send messages from. |
fb_verify_token |
Optional | The token used to verify facebook. |
to |
Optional | Recipient is who the message was sent to (required). |
app_secret |
Optional | The app secret from the facebook developer portal. |
verify |
Optional | verifying webhooks on the Facebook Developer Portal. |
images |
Optional | send the image message. Ex: https://golang.org/doc/gopher/gophercolor.png |
audios |
Optional | send the audio message. |
files |
Optional | send the file message. |
videos |
Optional | send the video message. |
name: 'Facebook Message Notify'
description: 'Sending a Facebook message'
author: 'Bo-Yi Wu'
inputs:
fb_page_token:
description: 'Token is the access token of the Facebook page to send messages from.'
fb_verify_token:
description: 'The token used to verify facebook.'
to:
description: 'Recipient is who the message was sent to (required).'
app_secret:
description: 'The app secret from the facebook developer portal.'
verify:
description: 'verifying webhooks on the Facebook Developer Portal.'
images:
description: 'send the image message. Ex: https://golang.org/doc/gopher/gophercolor.png'
audios:
description: 'send the audio message.'
files:
description: 'send the file message.'
videos:
description: 'send the video message.'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'message-square'
color: 'blue'
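A short usage sketch; the ref and secret names are assumptions added for illustration.
```yaml
# Hypothetical usage; the action ref and secrets are placeholders.
steps:
  - uses: appleboy/facebook-action@master                  # ref assumed
    with:
      fb_page_token: ${{ secrets.FB_PAGE_TOKEN }}          # hypothetical secret
      fb_verify_token: ${{ secrets.FB_VERIFY_TOKEN }}      # hypothetical secret
      to: ${{ secrets.FB_RECIPIENT_ID }}                   # hypothetical secret
      images: https://golang.org/doc/gopher/gophercolor.png
```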
Action ID: marketplace/azure/WordPress-Azure-VirtualMachine
Author: Unknown
Publisher: azure
Repository: github.com/azure/WordPress-Azure-VirtualMachine
This action helps create an Azure Virtual Machine and deploy WordPress on it
| Name | Required | Description |
|---|---|---|
client-id |
Required | Client id to login to azure |
tenant-id |
Required | Tenant id to login to azure |
subscription-id |
Required | Subscription id to be used with your az login |
resource-group-name |
Required | Resource group to deploy your resources to |
admin-username |
Required | Admin username to login to app |
admin-password |
Required | Admin password to login to app and mySql |
name: 'Deploy WordPress on Azure Virtual Machine - Azure Quickstart Template'
description: 'This action helps create an Azure Virtual Machine and deploy WordPress on it'
branding:
icon: 'play-circle'
color: 'blue'
inputs:
client-id:
description: 'Client id to login to azure'
required: true
tenant-id:
description: 'Tenant id to login to azure'
required: true
subscription-id:
description: 'Subscription id to be used with your az login'
required: true
resource-group-name:
description: 'Resource group to deploy your resources to'
required: true
admin-username:
description: 'Admin username to login to app'
required: true
admin-password:
description: 'Admin password to login to app and mySql'
required: true
runs:
using: 'composite'
steps:
- name: 'Checkout master'
uses: actions/checkout@v3
- name: az cli login
uses: azure/login@v1
with:
client-id: ${{ inputs.client-id }}
tenant-id: ${{ inputs.tenant-id }}
subscription-id: ${{ inputs.subscription-id }}
enable-AzPSSession: true
- name: 'Az deploy - Wordpress on VM'
uses: azure/arm-deploy@v1
with:
subscriptionId: ${{ inputs.subscription-id }}
resourceGroupName: ${{ inputs.resource-group-name }}
template: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/application-workloads/wordpress/wordpress-single-vm-ubuntu/azuredeploy.json
parameters: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/application-workloads/wordpress/wordpress-single-vm-ubuntu/azuredeploy.parameters.json
authenticationType=password
mySqlPassword=${{ inputs.admin-password }}
vmDnsName=dns-vm-2
vmSize=Standard_D2as_v4
adminUsername=${{ inputs.admin-username }}
adminPasswordOrKey=${{inputs.admin-password}}
failOnStdErr: false
- name: Fetch deployment record - Run Azure PowerShell inline script
uses: azure/powershell@v1
with:
inlineScript: |
Get-AzResourceGroupDeploymentOperation -ResourceGroupName ${{ inputs.resource-group-name }} -DeploymentName "azuredeploy"
azPSVersion: "latest"
Action ID: marketplace/azure/powershell
Author: Unknown
Publisher: azure
Repository: github.com/azure/powershell
Automate your GitHub workflows using Azure PowerShell scripts.
| Name | Required | Description |
|---|---|---|
inlineScript |
Required | Specify the Az PowerShell script here. |
azPSVersion |
Required | Azure PS version to be used to execute the script, example: 1.8.0, 2.8.0, 3.4.0. To use the latest version, specify "latest". |
errorActionPreference |
Optional | Select the value of the ErrorActionPreference variable for executing the script. Options: stop, continue, silentlyContinue. Default is Stop. Default: Stop |
failOnStandardError |
Optional | If this is true, this task will fail if any errors are written to the error pipeline, or if any data is written to the Standard Error stream. Default: false |
githubToken |
Optional | Used to pull Az module from Azure/az-ps-module-versions. Since there's a default, this is typically not supplied by the user. Default: ${{ github.token }} |
# Azure PowerShell Action
name: 'Azure PowerShell Action'
description: 'Automate your GitHub workflows using Azure PowerShell scripts.'
inputs:
inlineScript:
description: 'Specify the Az PowerShell script here.'
required: true
azPSVersion:
description: 'Azure PS version to be used to execute the script, example: 1.8.0, 2.8.0, 3.4.0. To use the latest version, specify "latest".'
required: true
errorActionPreference:
description: 'Select the value of the ErrorActionPreference variable for executing the script. Options: stop, continue, silentlyContinue. Default is Stop.'
required: false
default: 'Stop'
failOnStandardError:
description: 'If this is true, this task will fail if any errors are written to the error pipeline, or if any data is written to the Standard Error stream.'
required: false
default: 'false'
githubToken:
description: Used to pull Az module from Azure/az-ps-module-versions. Since there's a default, this is typically not supplied by the user.
default: ${{ github.token }}
branding:
icon: 'log-in'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main.js'
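A minimal sketch of running an inline Az PowerShell script; the preceding azure/login step and its secret name are assumptions.
```yaml
# Hypothetical usage; the login credentials secret is a placeholder.
steps:
  - uses: azure/login@v1                           # assumed companion action
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}      # hypothetical secret
      enable-AzPSSession: true
  - uses: azure/powershell@v1                      # version tag assumed
    with:
      azPSVersion: 'latest'
      inlineScript: |
        Get-AzResourceGroup | Select-Object ResourceGroupName, Location
```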
Action ID: marketplace/DeLaGuardo/setup-clj-kondo
Author: DeLaGuardo
Publisher: DeLaGuardo
Repository: github.com/DeLaGuardo/setup-clj-kondo
Provision hosted runner with clj-kondo
| Name | Required | Description |
|---|---|---|
version |
Optional | desired version of clj-kondo to install |
name: 'Setup clj-kondo'
description: 'Provision hosted runner with clj-kondo'
author: 'DeLaGuardo'
branding:
icon: 'gift'
color: 'blue'
inputs:
version:
description: 'desired version of clj-kondo to install'
runs:
using: 'node12'
main: 'dist/index.js'
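A short usage sketch; the version value, action ref, and lint target are assumptions.
```yaml
# Hypothetical usage; the clj-kondo version and source path are placeholders.
steps:
  - uses: actions/checkout@v4
  - uses: DeLaGuardo/setup-clj-kondo@master        # ref assumed
    with:
      version: '2024.03.13'                        # hypothetical version string
  - run: clj-kondo --lint src
```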
Action ID: marketplace/codecov/basic-test-results
Author: Unknown
Publisher: codecov
Repository: github.com/codecov/basic-test-results
Aggregates and processes test files in CI. Outputs the failed tests as a comment on the PR.
| Name | Required | Description |
|---|---|---|
github-token |
Required | 'The secrets.GITHUB_TOKEN to be able to interact with the current PR and comment on it.' |
directory |
Optional | Directory to search for test result reports. Will run test processing on all files in the directory. |
disable-search |
Optional | Disable search for test result files. This is helpful when specifying what files you want to run test report processing with the --file option. |
exclude |
Optional | Directories to exclude from search. |
file |
Optional | Path to the junit file to run test result processing. |
version |
Optional | Specify which version of the Codecov CLI should be used. Defaults to `latest`. |
name: 'Basic Test Results'
description: 'Aggregates and processes test files in CI. Outputs the failed tests as a comment on the PR.'
inputs:
github-token:
description: |
'The secrets.GITHUB_TOKEN to be able to interact with the current PR and comment on it.'
required: true
directory:
description: 'Directory to search for test result reports. Will run test processing on all files in the directory.'
required: false
disable-search:
description: 'Disable search for test result files. This is helpful when specifying what files you want to run test report processing with the --file option.'
required: false
exclude:
description: 'Directories to exclude from search.'
required: false
file:
description: 'Path to the junit file to run test result processing.'
required: false
version:
description: 'Specify which version of the Codecov CLI should be used. Defaults to `latest`.'
required: false
runs:
using: "composite"
steps:
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install Codecov CLI
env:
VERSION: ${{ inputs.version }}
run: |
if [[ -z "${VERSION}" ]]; then
pip install codecov-cli
else
pip install codecov-cli==${VERSION}
fi
shell: bash
- name: Run Test Results Processor
if: ${{ !cancelled() }}
shell: bash
env:
GITHUB_TOKEN: ${{ inputs.github-token }}
TEST_DIR: ${{ inputs.directory }}
DISABLE_SEARCH: ${{ inputs.disable-search }}
EXCLUDE: ${{ inputs.exclude }}
FILE: ${{ inputs.file }}
run: |
codecov_cli_args=()
if [[ ! -z "${TEST_DIR}" ]]; then
codecov_cli_args+=("--dir" ${TEST_DIR})
fi
if [[ ! -z "${DISABLE_SEARCH}" ]]; then
codecov_cli_args+=("--disable-search")
fi
if [[ ! -z "${EXCLUDE}" ]]; then
codecov_cli_args+=("--exclude" ${EXCLUDE})
fi
if [[ ! -z "${FILE}" ]]; then
codecov_cli_args+=("--file" ${FILE})
fi
codecovcli process-test-results --github-token ${GITHUB_TOKEN} ${codecov_cli_args[*]}
branding:
color: 'red'
icon: 'umbrella'
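An illustrative sketch; the test command, JUnit report path, and version tag are assumptions added to show how the `file` and `github-token` inputs fit together.
```yaml
# Hypothetical usage; the test runner and report filename are placeholders.
steps:
  - uses: actions/checkout@v4
  - run: pytest --junitxml=junit.xml               # hypothetical test step producing a JUnit report
    continue-on-error: true
  - uses: codecov/basic-test-results@v1            # version tag assumed
    if: ${{ !cancelled() }}
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      file: junit.xml
```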
Action ID: marketplace/docker/scout-action
Author: Docker
Publisher: docker
Repository: github.com/docker/scout-action
List vulnerabilities in images; find better base images and upload an image SBOM to Docker Scout
| Name | Required | Description |
|---|---|---|
command |
Required | Command(s) to run. Use a comma separated list to run several commands on the same set of parameters, for instance quickview,compare |
debug |
Optional | Debug |
verbose-debug |
Optional | Print more verbose debug messages |
summary |
Optional | Publish the output as GitHub Action summary Default: True |
organization |
Optional | Namespace of the Docker organization |
image |
Optional | Image to analyze |
platform |
Optional | Platform of the image to analyze |
ref |
Optional | Ref if needed |
to |
Optional | Image to compare to |
to-ref |
Optional | Ref of image to compare |
to-stream |
Optional | Compare to image in stream |
to-env |
Optional | Compare to image in environment |
to-latest |
Optional | Compare to latest pushed image |
stream |
Optional | Name of the stream to record the image |
environment |
Optional | Name of the environment to record the image |
hide-policies |
Optional | Hide policies from the output altogether |
ignore-base |
Optional | Ignore vulnerabilities from base image |
ignore-unchanged |
Optional | Filter out unchanged packages |
only-vex-affected |
Optional | Filter out CVEs that are marked not affected by a VEX statement |
vex-author |
Optional | List of VEX statement authors to accept |
vex-location |
Optional | File location of directory or file containing VEX statement |
only-fixed |
Optional | Filter to fixable CVEs |
only-unfixed |
Optional | Filter to unfixed CVEs |
only-severities |
Optional | Comma separated list of severities (critical, high, medium, low, unspecified) to filter CVEs by |
only-package-types |
Optional | Comma separated list of package types (like apk, deb, rpm, npm, pypi, golang, etc) |
only-cisa-kev |
Optional | Filter to CVEs listed in the CISA Known Exploited Vulnerabilities catalog |
exit-code |
Optional | Fail the action step if vulnerability changes are detected |
exit-on |
Optional | (compare only) Comma separated list of conditions to fail the action step if worsened, options are: vulnerability, policy |
sarif-file |
Optional | Write output to a SARIF file for further processing or upload into GitHub code scanning |
format |
Optional | Format of the SBOM to generate (json, list, spdx, cyclonedx) Default: json |
output |
Optional | Output file for the SBOM |
secrets |
Optional | Enable secret scanning as part of SBOM indexing |
tags |
Optional | List of tags to add to the attestation |
file |
Optional | File path to the attestation file |
predicate-type |
Optional | Predicate type of the attestation |
referrer |
Optional | Use OCI referrer API for pushing attestation |
referrer-repository |
Optional | Repository to push referrer to |
registry-write-user |
Optional | Registry user to push attestations |
registry-write-password |
Optional | Registry password to push attestations |
dockerhub-user |
Optional | Docker Hub User |
dockerhub-password |
Optional | Docker Hub PAT |
registry-user |
Optional | Registry user to pull images |
registry-password |
Optional | Registry password to pull images |
github-token |
Optional | GitHub Token to write comments Default: ${{ github.token }} |
write-comment |
Optional | Write the output as a Pull Request comment Default: True |
keep-previous-comments |
Optional | If set, keep but hide previous comment. If not set, keep and update one single comment per job |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: Docker Scout
description: List vulnerabilities in images; find better base images and upload an image SBOM to Docker Scout
author: Docker
inputs:
command:
required: true
description: |
Command(s) to run.
Use a comma separated list to run several commands on the same set of parameters, for instance quickview,compare
debug:
required: false
description: Debug
verbose-debug:
required: false
description: Print more verbose debug messages
summary:
required: false
description: Publish the output as GitHub Action summary
default: true
organization:
required: false
description: Namespace of the Docker organization
image:
required: false
description: Image to analyze
platform:
required: false
description: Platform of the image to analyze
ref:
required: false
description: Ref if needed
# compare flags
to:
required: false
description: Image to compare to
to-ref:
required: false
description: Ref of image to compare
to-stream:
required: false
description: Compare to image in stream
deprecationMessage: Use to-env instead
to-env:
required: false
description: Compare to image in environment
to-latest:
required: false
description: Compare to latest pushed image
# stream/environment flags
stream:
required: false
description: Name of the stream to record the image
deprecationMessage: Use environment instead
environment:
required: false
description: Name of the environment to record the image
# policy flags
hide-policies:
required: false
description: Hide policies from the output altogether
# filter flags
ignore-base:
required: false
description: Ignore vulnerabilities from base image
ignore-unchanged:
required: false
description: Filter out unchanged packages
only-vex-affected:
required: false
description: Filter out CVEs that are marked not affected by a VEX statement
vex-author:
required: false
description: List of VEX statement authors to accept
vex-location:
required: false
description: File location of directory or file containing VEX statement
only-fixed:
required: false
description: Filter to fixable CVEs
only-unfixed:
required: false
description: Filter to unfixed CVEs
only-severities:
required: false
description: Comma separated list of severities (critical, high, medium, low, unspecified) to filter CVEs by
only-package-types:
required: false
description: Comma separated list of package types (like apk, deb, rpm, npm, pypi, golang, etc)
only-cisa-kev:
required: false
description: Filter to CVEs listed in the CISA Known Exploited Vulnerabilities catalog
exit-code:
required: false
description: Fail the action step if vulnerability changes are detected
exit-on:
required: false
description: "(compare only) Comma separated list of conditions to fail the action step if worsened, options are: vulnerability, policy"
sarif-file:
required: false
description: Write output to a SARIF file for further processing or upload into GitHub code scanning
# sbom flags
format:
required: false
description: Format of the SBOM to generate (json, list, spdx, cyclonedx)
default: json
output:
required: false
description: Output file for the SBOM
secrets:
required: false
description: Enable secret scanning as part of SBOM indexing
# attestation add flags
tags:
required: false
description: List of tags to add to the attestation
file:
required: false
description: File path to the attestation file
predicate-type:
required: false
description: Predicate type of the attestation
referrer:
required: false
description: Use OCI referrer API for pushing attestation
referrer-repository:
required: false
description: Repository to push referrer to
# credentials needed to push images
registry-write-user:
description: Registry user to push attestations
required: false
registry-write-password:
description: Registry password to push attestations
required: false
dockerhub-user:
required: false
description: Docker Hub User
dockerhub-password:
required: false
description: Docker Hub PAT
# credentials needed to pull private images
registry-user:
description: Registry user to pull images
required: false
registry-password:
description: Registry password to pull images
required: false
# comments
github-token:
description: GitHub Token to write comments
default: ${{ github.token }}
required: false
write-comment:
description: Write the output as a Pull Request comment
required: false
default: true
keep-previous-comments:
description: If set, keep but hide previous comment. If not set, keep and update one single comment per job
required: false
runs:
using: node20
main: index.js
branding:
icon: shield
color: gray-dark
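An illustrative sketch of scanning a pushed image; the command list, image name, version tag, and secret names are assumptions.
```yaml
# Hypothetical usage; image and Docker Hub credentials are placeholders.
steps:
  - uses: docker/scout-action@v1                           # version tag assumed
    with:
      command: quickview,cves                              # commands assumed for illustration
      image: myorg/myapp:latest                            # hypothetical image
      dockerhub-user: ${{ secrets.DOCKERHUB_USER }}        # hypothetical secret
      dockerhub-password: ${{ secrets.DOCKERHUB_PAT }}     # hypothetical secret
      only-severities: critical,high
      only-fixed: true
      sarif-file: scout-results.sarif
      write-comment: true
```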
Action ID: marketplace/azure/docker-login
Author: Unknown
Publisher: azure
Repository: github.com/azure/docker-login
Log in to Azure Container Registry (ACR) or any private container registry
| Name | Required | Description |
|---|---|---|
username |
Required | Container registry username |
password |
Required | Container registry password |
login-server |
Required | Container registry server url Default: https://index.docker.io/v1/ |
name: 'Azure Container Registry Login'
description: 'Log in to Azure Container Registry (ACR) or any private container registry'
inputs:
username:
description: 'Container registry username'
required: true
default: ''
password:
description: 'Container registry password'
required: true
default: ''
login-server:
description: 'Container registry server url'
required: true
default: 'https://index.docker.io/v1/'
branding:
color: 'green'
runs:
using: 'node20'
main: 'lib/login.js'
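A minimal sketch of logging in to a private registry before building and pushing; the registry name and secret names are assumptions.
```yaml
# Hypothetical usage; registry server and credentials are placeholders.
steps:
  - uses: azure/docker-login@v1                    # version tag assumed
    with:
      login-server: myregistry.azurecr.io          # hypothetical registry
      username: ${{ secrets.ACR_USERNAME }}        # hypothetical secret
      password: ${{ secrets.ACR_PASSWORD }}        # hypothetical secret
  - run: |
      docker build -t myregistry.azurecr.io/myapp:${{ github.sha }} .
      docker push myregistry.azurecr.io/myapp:${{ github.sha }}
```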
Action ID: marketplace/google-github-actions/setup-gcloud
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/setup-gcloud
Downloads, installs, and configures a Google Cloud SDK environment. Adds the `gcloud` CLI command to the $PATH.
| Name | Required | Description |
|---|---|---|
version |
Optional | A string representing the version or version constraint of the Cloud SDK
(`gcloud`) to install (e.g. `"290.0.1"` or `">= 197.0.1"`). The default
value is `"latest"`, which will always download and install the latest
available Cloud SDK version.
- uses: 'google-github-actions/setup-gcloud@v3'
with:
version: '>= 416.0.0'
If there is no installed `gcloud` version that matches the given
constraint, this GitHub Action will download and install the latest
available version that still matches the constraint.
Authenticating via Workload Identity Federation requires version
[363.0.0](https://cloud.google.com/sdk/docs/release-notes#36300_2021-11-02)
or newer. If you need support for Workload Identity Federation, specify
your version constraint as such:
- uses: 'google-github-actions/setup-gcloud@v3'
with:
version: '>= 363.0.0'
You are responsible for ensuring the `gcloud` version matches the features
and components required. Default: latest |
project_id |
Optional | ID of the Google Cloud project. If provided, this will configure gcloud to use this project ID by default for commands. Individual commands can still override the project using the `--project` flag which takes precedence. If unspecified, the action attempts to find the "best" project ID by looking at other inputs and environment variables. |
install_components |
Optional | List of additional [gcloud components](https://cloud.google.com/sdk/docs/components) to install, specified as a comma-separated list of strings: install_components: 'alpha,cloud-datastore-emulator' |
skip_install |
Optional | Skip installation of gcloud and use the [system-supplied version](https://github.com/actions/runner-images) instead. If specified, the `version` input is ignored. ⚠️ You will not be able to install additional gcloud components, because the system installation is locked. |
cache |
Optional | Transfer the downloaded artifacts into the runner's tool cache. On GitHub-managed runners, this has very little impact since runners are ephemeral. On self-hosted runners, this could improve future runs by skipping future gcloud installations. |
| Name | Description |
|---|---|
version |
Version of gcloud that was installed. |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Set up gcloud Cloud SDK environment'
author: 'Google LLC'
description: |-
Downloads, installs, and configures a Google Cloud SDK environment.
Adds the `gcloud` CLI command to the $PATH.
inputs:
version:
description: |-
A string representing the version or version constraint of the Cloud SDK
(`gcloud`) to install (e.g. `"290.0.1"` or `">= 197.0.1"`). The default
value is `"latest"`, which will always download and install the latest
available Cloud SDK version.
- uses: 'google-github-actions/setup-gcloud@v3'
with:
version: '>= 416.0.0'
If there is no installed `gcloud` version that matches the given
constraint, this GitHub Action will download and install the latest
available version that still matches the constraint.
Authenticating via Workload Identity Federation requires version
[363.0.0](https://cloud.google.com/sdk/docs/release-notes#36300_2021-11-02)
or newer. If you need support for Workload Identity Federation, specify
your version constraint as such:
- uses: 'google-github-actions/setup-gcloud@v3'
with:
version: '>= 363.0.0'
You are responsible for ensuring the `gcloud` version matches the features
and components required.
default: 'latest'
required: false
project_id:
description: |-
ID of the Google Cloud project. If provided, this will configure gcloud to
use this project ID by default for commands. Individual commands can still
override the project using the `--project` flag which takes precedence. If
unspecified, the action attempts to find the "best" project ID by looking
at other inputs and environment variables.
required: false
install_components:
description: |-
List of additional [gcloud
components](https://cloud.google.com/sdk/docs/components) to install,
specified as a comma-separated list of strings:
install_components: 'alpha,cloud-datastore-emulator'
required: false
skip_install:
description: |-
Skip installation of gcloud and use the [system-supplied
version](https://github.com/actions/runner-images) instead. If specified,
the `version` input is ignored.
⚠️ You will not be able to install additional gcloud components, because the
system installation is locked.
default: false
required: false
cache:
description: |-
Transfer the downloaded artifacts into the runner's tool cache. On
GitHub-managed runners, this has very little impact since runners are
ephemeral. On self-hosted runners, this could improve future runs by
skipping future gcloud installations.
default: false
required: false
outputs:
version:
description: |-
Version of gcloud that was installed.
branding:
icon: 'terminal'
color: 'blue'
runs:
using: 'node24'
main: 'dist/index.js'
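An illustrative sketch combining the version constraint and component list shown above; the auth step, credentials secret, and project ID are assumptions.
```yaml
# Hypothetical usage; credentials and project ID are placeholders.
steps:
  - uses: google-github-actions/auth@v2            # assumed companion action for credentials
    with:
      credentials_json: ${{ secrets.GCP_SA_KEY }}  # hypothetical secret
  - id: setup
    uses: google-github-actions/setup-gcloud@v3
    with:
      version: '>= 416.0.0'
      project_id: 'my-project'                     # hypothetical project
      install_components: 'alpha,cloud-datastore-emulator'
  - run: |
      echo "Installed gcloud ${{ steps.setup.outputs.version }}"
      gcloud info
```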
Action ID: marketplace/dokku/github-action
Author: Dokku
Publisher: dokku
Repository: github.com/dokku/github-action
Official Github Action for deploying apps to a Dokku installation
| Name | Required | Description |
|---|---|---|
branch |
Optional | The branch to deploy when pushing to Dokku (default: master) Default: master |
ci_branch_name |
Optional | The name of the branch being deployed (default: detected from GITHUB_REF) |
ci_commit |
Optional | The commit sha that will be pushed (default: detected from GITHUB_SHA) |
command |
Optional | The command to run for the action (default: deploy) Default: deploy |
deploy_docker_image |
Optional | A docker image to deploy via `git:from-image` |
deploy_user_name |
Optional | A username to use when deploying a docker image |
deploy_user_email |
Optional | The email to use when deploying a docker image |
git_push_flags |
Optional | A string containing a set of flags to set on push |
git_remote_url |
Required | The dokku app's git repository url (in SSH format) |
review_app_name |
Optional | The name of the review app to create or destroy |
ssh_host_key |
Optional | The results of running `ssh-keyscan -t rsa $HOST` |
ssh_private_key |
Required | A private SSH key that has push access to your Dokku instance |
ssh_passphrase |
Optional | Passphrase to use when interacting with an SSH key that has a passphrase |
trace |
Optional | Allows users to debug what the action is performing by enabling shell trace mode |
---
name: "Dokku"
description: "Official Github Action for deploying apps to a Dokku installation"
author: "Dokku"
branding:
icon: "upload-cloud"
color: "blue"
inputs:
branch:
description: "The branch to deploy when pushing to Dokku (default: master)"
required: false
default: "master"
ci_branch_name:
description: "The command to run for the action (default: detected from GITHUB_REF)"
required: false
default: ""
ci_commit:
description: "The commit sha that will be pushed (default: detected from GITHUB_SHA)"
required: false
default: ""
command:
description: "The command to run for the action (default: deploy)"
required: false
default: "deploy"
deploy_docker_image:
description: "A docker image to deploy via `git:from-image`"
required: false
default: ""
deploy_user_name:
description: "A username to use when deploying a docker image"
required: false
default: ""
deploy_user_email:
description: "The email to use when deploying a docker image"
required: false
default: ""
git_push_flags:
description: "A string containing a set of flags to set on push"
required: false
default: ""
git_remote_url:
description: "The dokku app's git repository url (in SSH format)"
required: true
review_app_name:
description: "The name of the review app to create or destroy"
required: false
default: ""
ssh_host_key:
description: "The results of running `ssh-keyscan -t rsa $HOST`"
required: false
default: ""
ssh_private_key:
description: "A private SSH key that has push access to your Dokku instance"
required: true
ssh_passphrase:
description: "Passphrase to use when interacting with an SSH key that has a passphrase"
required: false
default: ""
trace:
description: "Allows users to debug what the action is performing by enabling shell trace mode"
required: false
default: ""
runs:
using: "docker"
image: "Dockerfile"
entrypoint: "/bin/dokku-deploy"
post-entrypoint: "/bin/dokku-unlock"
post-if: cancelled()
env:
BRANCH: ${{ inputs.branch }}
CI_BRANCH_NAME: ${{ inputs.ci_branch_name }}
CI_COMMIT: ${{ inputs.ci_commit }}
COMMAND: ${{ inputs.command }}
DEPLOY_DOCKER_IMAGE: ${{ inputs.deploy_docker_image }}
DEPLOY_USER_NAME: ${{ inputs.deploy_user_name }}
DEPLOY_USER_EMAIL: ${{ inputs.deploy_user_email }}
GIT_PUSH_FLAGS: ${{ inputs.git_push_flags }}
GIT_REMOTE_URL: ${{ inputs.git_remote_url }}
REVIEW_APP_NAME: ${{ inputs.review_app_name }}
SSH_HOST_KEY: ${{ inputs.ssh_host_key }}
SSH_PRIVATE_KEY: ${{ inputs.ssh_private_key }}
SSH_PASSPHRASE: ${{ inputs.ssh_passphrase }}
TRACE: ${{ inputs.trace }}
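A minimal deployment sketch using only the two required inputs from the table above; the host, app name, secret name, and the `@master` ref are placeholders:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # A full clone is needed so the action has commits to push to Dokku.
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: dokku/github-action@master   # pin a released ref as appropriate
        with:
          git_remote_url: 'ssh://dokku@dokku.example.com:22/myapp'
          ssh_private_key: ${{ secrets.DOKKU_SSH_PRIVATE_KEY }}
```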
Action ID: marketplace/appleboy/LLM-action
Author: appleboy
Publisher: appleboy
Repository: github.com/appleboy/LLM-action
GitHub Action to interact with OpenAI Compatible LLM services
| Name | Required | Description |
|---|---|---|
base_url |
Optional | Base URL for OpenAI Compatible API endpoint Default: https://api.openai.com/v1 |
api_key |
Required | API Key for authentication |
model |
Optional | Model name to use Default: gpt-4o |
skip_ssl_verify |
Optional | Skip SSL certificate verification Default: false |
system_prompt |
Optional | System prompt to set the context. Supports plain text, file path, or URL (http://, https://). For files, use absolute/relative path or file:// prefix. Supports Go templates with environment variables (e.g., {{.GITHUB_REPOSITORY}}, {{.MODEL}}). |
input_prompt |
Required | User input prompt for the LLM. Supports plain text, file path, or URL (http://, https://). For files, use absolute/relative path or file:// prefix. Supports Go templates with environment variables (e.g., {{.GITHUB_REPOSITORY}}, {{.MODEL}}). |
temperature |
Optional | Temperature for response randomness (0.0-2.0) Default: 0.7 |
max_tokens |
Optional | Maximum tokens in the response Default: 1000 |
tool_schema |
Optional | JSON schema for structured output via function calling. Supports plain text, file path, or URL. Supports Go templates with environment variables (e.g., {{.GITHUB_REPOSITORY}}). |
debug |
Optional | Enable debug mode to print all parameters Default: false |
headers |
Optional | Custom HTTP headers to include in API requests. Format: "Header1:Value1,Header2:Value2" or multiline with one header per line. Useful for log analysis or custom authentication. |
| Name | Description |
|---|---|
response |
The response from the LLM |
name: 'LLM Action'
description: 'GitHub Action to interact with OpenAI Compatible LLM services'
author: 'appleboy'
branding:
icon: 'message-square'
color: 'blue'
inputs:
base_url:
description: 'Base URL for OpenAI Compatible API endpoint'
required: false
default: 'https://api.openai.com/v1'
api_key:
description: 'API Key for authentication'
required: true
model:
description: 'Model name to use'
required: false
default: 'gpt-4o'
skip_ssl_verify:
description: 'Skip SSL certificate verification'
required: false
default: 'false'
system_prompt:
description: 'System prompt to set the context. Supports plain text, file path, or URL (http://, https://). For files, use absolute/relative path or file:// prefix. Supports Go templates with environment variables (e.g., {{.GITHUB_REPOSITORY}}, {{.MODEL}}).'
required: false
default: ''
input_prompt:
description: 'User input prompt for the LLM. Supports plain text, file path, or URL (http://, https://). For files, use absolute/relative path or file:// prefix. Supports Go templates with environment variables (e.g., {{.GITHUB_REPOSITORY}}, {{.MODEL}}).'
required: true
temperature:
description: 'Temperature for response randomness (0.0-2.0)'
required: false
default: '0.7'
max_tokens:
description: 'Maximum tokens in the response'
required: false
default: '1000'
tool_schema:
description: 'JSON schema for structured output via function calling. Supports plain text, file path, or URL. Supports Go templates with environment variables (e.g., {{.GITHUB_REPOSITORY}}).'
required: false
default: ''
debug:
description: 'Enable debug mode to print all parameters'
required: false
default: 'false'
headers:
description: 'Custom HTTP headers to include in API requests. Format: "Header1:Value1,Header2:Value2" or multiline with one header per line. Useful for log analysis or custom authentication.'
required: false
default: ''
outputs:
response:
description: 'The response from the LLM'
runs:
using: 'docker'
image: 'Dockerfile'
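A short sketch of calling the action and reading its `response` output; the tag, secret name, and prompt text are illustrative:

```yaml
steps:
  - id: llm
    uses: appleboy/LLM-action@v1   # tag is illustrative
    with:
      api_key: ${{ secrets.OPENAI_API_KEY }}
      model: 'gpt-4o'
      # input_prompt supports Go templates with environment variables,
      # per the description above.
      input_prompt: 'Summarize the purpose of {{.GITHUB_REPOSITORY}} in one sentence.'
  # The LLM reply is exposed as the `response` output.
  - run: echo "${{ steps.llm.outputs.response }}"
```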
Action ID: marketplace/julia-actions/test-action
Author: Your name or organization here
Publisher: julia-actions
Repository: github.com/julia-actions/test-action
Provide a description here
| Name | Required | Description |
|---|---|---|
milliseconds |
Required | input description here Default: default value if applicable |
name: 'Your name here'
description: 'Provide a description here'
author: 'Your name or organization here'
inputs:
milliseconds: # change this
required: true
description: 'input description here'
default: 'default value if applicable'
runs:
using: 'node12'
main: 'dist/index.js'
Action ID: marketplace/stefanzweifel/git-auto-commit-action
Author: Stefan Zweifel <stefan@stefanzweifel.dev>
Publisher: stefanzweifel
Repository: github.com/stefanzweifel/git-auto-commit-action
Automatically commits files which have been changed during the workflow run and pushes the changes back to the remote repository.
| Name | Required | Description |
|---|---|---|
commit_message |
Optional | Commit message Default: Apply automatic changes |
branch |
Optional | Git branch name where changes should be pushed to. Required if Action is used on the `pull_request` event Default: ${{ github.head_ref }} |
commit_options |
Optional | Commit options (eg. --no-verify) |
add_options |
Optional | Add options (eg. -u) |
status_options |
Optional | Status options (eg. --untracked-files=no) |
file_pattern |
Optional | File pattern used for `git add`. For example `src/*.js` Default: . |
repository |
Optional | Local file path to the git repository. Defaults to the current directory (`.`) Default: . |
commit_user_name |
Optional | Name used for the commit user Default: github-actions[bot] |
commit_user_email |
Optional | Email address used for the commit user Default: 41898282+github-actions[bot]@users.noreply.github.com |
commit_author |
Optional | Value used for the commit author. Defaults to the username of whoever triggered this workflow run. Default: ${{ github.actor }} <${{ github.actor_id }}+${{ github.actor }}@users.noreply.github.com> |
tag_name |
Optional | Tag name used for creating a new git tag with the commit. Keep this empty, if no tag should be created. |
tagging_message |
Optional | Tagging message used for creating a new git tag with the commit. Keep this empty, if no tag should be created. |
push_options |
Optional | Push options (eg. --force) |
skip_dirty_check |
Optional | Skip the check if the git repository is dirty and always try to create a commit. |
skip_fetch |
Optional | Skip the call to git-fetch. |
skip_checkout |
Optional | Skip the call to git-checkout. |
disable_globbing |
Optional | Stop the shell from expanding filenames (https://www.gnu.org/software/bash/manual/html_node/Filename-Expansion.html) |
create_branch |
Optional | Create new branch with the name of `branch`-input in local and remote repository, if it doesn't exist yet. |
create_git_tag_only |
Optional | Perform a clean git tag and push, without committing anything |
internal_git_binary |
Optional | Internal use only! Path to git binary used to check if git is available. (Don't change this!) Default: git |
| Name | Description |
|---|---|
changes_detected |
Value is "true", if the repository was dirty and file changes have been detected. Value is "false", if no changes have been detected. |
commit_hash |
Full hash of the created commit. Only present if the "changes_detected" output is "true". |
create_git_tag_only |
Value is "true", if a git tag was created using the `create_git_tag_only`-input. |
name: Git Auto Commit
description: 'Automatically commits files which have been changed during the workflow run and pushes the changes back to the remote repository.'
author: Stefan Zweifel <stefan@stefanzweifel.dev>
inputs:
commit_message:
description: Commit message
required: false
default: Apply automatic changes
branch:
description: Git branch name where changes should be pushed to. Required if Action is used on the `pull_request` event
required: false
default: ${{ github.head_ref }}
commit_options:
description: Commit options (eg. --no-verify)
required: false
default: ''
add_options:
description: Add options (eg. -u)
required: false
default: ''
status_options:
description: Status options (eg. --untracked-files=no)
required: false
default: ''
file_pattern:
description: File pattern used for `git add`. For example `src/*.js`
required: false
default: '.'
repository:
description: Local file path to the git repository. Defaults to the current directory (`.`)
required: false
default: '.'
commit_user_name:
description: Name used for the commit user
required: false
default: github-actions[bot]
commit_user_email:
description: Email address used for the commit user
required: false
default: 41898282+github-actions[bot]@users.noreply.github.com
commit_author:
description: Value used for the commit author. Defaults to the username of whoever triggered this workflow run.
required: false
default: ${{ github.actor }} <${{ github.actor_id }}+${{ github.actor }}@users.noreply.github.com>
tag_name:
description: Tag name used for creating a new git tag with the commit. Keep this empty, if no tag should be created.
required: false
default: ''
tagging_message:
description: Tagging message used for creating a new git tag with the commit. Keep this empty, if no tag should be created.
required: false
default: ''
push_options:
description: Push options (eg. --force)
required: false
default: ''
skip_dirty_check:
description: Skip the check if the git repository is dirty and always try to create a commit.
required: false
default: false
skip_fetch:
description: Skip the call to git-fetch.
required: false
default: false
skip_checkout:
description: Skip the call to git-checkout.
required: false
default: false
disable_globbing:
description: Stop the shell from expanding filenames (https://www.gnu.org/software/bash/manual/html_node/Filename-Expansion.html)
default: false
create_branch:
description: Create new branch with the name of `branch`-input in local and remote repository, if it doesn't exist yet.
default: false
create_git_tag_only:
description: Perform a clean git tag and push, without committing anything
required: false
default: false
internal_git_binary:
description: Internal use only! Path to git binary used to check if git is available. (Don't change this!)
default: git
outputs:
changes_detected:
description: Value is "true", if the repository was dirty and file changes have been detected. Value is "false", if no changes have been detected.
commit_hash:
description: Full hash of the created commit. Only present if the "changes_detected" output is "true".
create_git_tag_only:
description: Value is "true", if a git tag was created using the `create_git_tag_only`-input.
runs:
using: 'node24'
main: 'index.js'
branding:
icon: 'git-commit'
color: orange
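A typical sketch: run any step that changes tracked files, then let the action commit and push the result; the formatter command, file pattern, and tag are illustrative:

```yaml
jobs:
  auto-commit:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # needed so the push back to the repository is allowed
    steps:
      - uses: actions/checkout@v4
      - run: npx prettier --write .   # placeholder for whatever step modifies files
      - uses: stefanzweifel/git-auto-commit-action@v5   # tag is illustrative
        with:
          commit_message: Apply automatic formatting
          file_pattern: '*.js *.css'
```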
Action ID: marketplace/amirisback/android-flutter-injected
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-flutter-injected
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/google-github-actions/release-please-action
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/release-please-action
automated releases based on conventional commits
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub token for creating and grooming release PRs, defaults to using secrets.GITHUB_TOKEN Default: ${{ github.token }} |
release-type |
Optional | what type of release is this, one of (ruby, python, node, terraform-module) |
path |
Optional | create a release from a path other than the repository's root |
target-branch |
Optional | branch to open the release PR against (detected by default) |
config-file |
Optional | where can the config file be found in the project? |
manifest-file |
Optional | where can the manifest file be found in the project? |
repo-url |
Optional | configure github repository URL. Default `process.env.GITHUB_REPOSITORY` Default: ${{ github.repository }} |
github-api-url |
Optional | configure github API URL. Default `https://api.github.com` Default: ${{ github.api_url }} |
github-graphql-url |
Optional | configure github GraphQL URL. Default `https://api.github.com` Default: ${{ github.graphql_url }} |
fork |
Optional | If true, send the PR from a fork. This requires the token to be a user that can create forks (e.g. not the default GITHUB_TOKEN) |
include-component-in-tag |
Optional | If true, add prefix to tags and branches, allowing multiple libraries to be released from the same repository |
proxy-server |
Optional | set proxy server when you run this action behind a proxy. format is host:port e.g. proxy-host.com:8080 |
skip-github-release |
Optional | if set to true, then do not try to tag releases |
skip-github-pull-request |
Optional | if set to true, then do not try to open pull requests |
changelog-host |
Optional | The proto://host where commits live. Default `https://github.com` Default: ${{ github.server_url }} |
name: 'release-please-action'
description: 'automated releases based on conventional commits'
author: Google LLC
inputs:
token:
description: 'GitHub token for creating and grooming release PRs, defaults to using secrets.GITHUB_TOKEN'
required: false
default: ${{ github.token }}
release-type:
description: 'what type of release is this, one of (ruby, python, node, terraform-module)'
required: false
default: ''
path:
description: "create a release from a path other than the repository's root"
required: false
default: ''
target-branch:
description: branch to open the release PR against (detected by default)
required: false
default: ''
config-file:
description: 'where can the config file be found in the project?'
required: false
default: ''
manifest-file:
description: 'where can the manifest file be found in the project?'
required: false
default: ''
repo-url:
description: 'configure github repository URL. Default `process.env.GITHUB_REPOSITORY`'
required: false
default: ${{ github.repository }}
github-api-url:
description: 'configure github API URL. Default `https://api.github.com`'
required: false
default: ${{ github.api_url }}
github-graphql-url:
description: 'configure github GraphQL URL. Default `https://api.github.com`'
required: false
default: ${{ github.graphql_url }}
fork:
description: 'If true, send the PR from a fork. This requires the token to be a user that can create forks (e.g. not the default GITHUB_TOKEN)'
required: false
default: false
include-component-in-tag:
description: 'If true, add prefix to tags and branches, allowing multiple libraries to be released from the same repository'
required: false
default: false
proxy-server:
description: 'set proxy server when you run this action behind a proxy. format is host:port e.g. proxy-host.com:8080'
required: false
default: ''
skip-github-release:
description: 'if set to true, then do not try to tag releases'
required: false
default: false
skip-github-pull-request:
description: 'if set to true, then do not try to open pull requests'
required: false
default: false
changelog-host:
description: 'The proto://host where commits live. Default `https://github.com`'
required: false
default: ${{ github.server_url }}
runs:
using: 'node20'
main: 'dist/index.js'
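A minimal sketch using the documented `release-type` input; `node` is just one of the listed types, and the tag and permissions block are illustrative:

```yaml
jobs:
  release-please:
    runs-on: ubuntu-latest
    permissions:
      contents: write        # create releases and tags
      pull-requests: write   # open and groom release PRs
    steps:
      - uses: google-github-actions/release-please-action@v4   # tag is illustrative
        with:
          release-type: node
```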
Action ID: marketplace/julia-actions/julia-downgrade-compat
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-downgrade-compat
Uses Resolver.jl to find and install minimal compatible versions of dependencies.
| Name | Required | Description |
|---|---|---|
skip |
Optional | Comma-separated list of packages to not downgrade. |
projects |
Optional | Comma-separated list of Julia projects to resolve. Default: . |
mode |
Optional | Downgrade mode: deps (direct dependencies), alldeps (deps + weakdeps), weakdeps (only weakdeps), forcedeps (deps with strict lower bound verification) Default: alldeps |
julia_version |
Optional | Julia version to use with resolver (default: 1.10) Default: 1.10 |
name: 'Downgrade Julia compat entries'
description: 'Uses Resolver.jl to find and install minimal compatible versions of dependencies.'
inputs:
skip:
description: 'Comma-separated list of packages to not downgrade.'
default: ''
projects:
description: 'Comma-separated list of Julia projects to resolve.'
default: '.'
mode:
description: 'Downgrade mode: deps (direct dependencies), alldeps (deps + weakdeps), weakdeps (only weakdeps), forcedeps (deps with strict lower bound verification)'
default: 'alldeps'
julia_version:
description: 'Julia version to use with resolver (default: 1.10)'
default: '1.10'
runs:
using: "composite"
steps:
- run: julia --color=yes "${{ github.action_path }}/downgrade.jl" "${{ inputs.skip }}" "${{ inputs.projects }}" "${{ inputs.mode }}" "${{ inputs.julia_version }}"
shell: bash
branding:
icon: trending-down
color: purple
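Because the composite step shells out to `julia`, a Julia installation is assumed to already be on the PATH; the setup step, tags, and skip list below are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: julia-actions/setup-julia@v2              # assumed prerequisite: puts julia on PATH
    with:
      version: '1.10'
  - uses: julia-actions/julia-downgrade-compat@v1   # tag is illustrative
    with:
      skip: Pkg,TOML   # example packages to leave at their current compat bounds
      mode: alldeps
```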
Action ID: marketplace/google-github-actions/deploy-gke
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/deploy-gke
Use gke-deploy cli to deploy an image
| Name | Required | Description |
|---|---|---|
image |
Required | Image to be deployed |
app_name |
Required | Application name of the Kubernetes deployment |
region |
Required | Region/zone of GKE cluster to deploy to. |
cluster_name |
Required | Name of GKE cluster to deploy to. |
project_id |
Required | Project of GKE cluster to deploy to. |
namespace |
Optional | Namespace of GKE cluster to deploy to. If not provided, it will not be passed to the binary. |
expose |
Optional | The port provided will be used to expose the deployed workload object (i.e., port and targetPort will be set to the value provided in this flag). If not provided, it will not be passed to the binary. |
k8s_manifests |
Optional | Local or GCS path to configuration file or directory of configuration files to use to create Kubernetes objects (file or files in directory must end in ".yml" or ".yaml"). Prefix this value with "gs://" to indicate a GCS path. If not provided, it will not be passed to the binary. |
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Deploy GKE'
description: |-
Use gke-deploy cli to deploy an image
author: 'Google LLC'
inputs:
image:
description: |-
Image to be deployed
required: true
app_name:
description: |-
Application name of the Kubernetes deployment
required: true
region:
description: |-
Region/zone of GKE cluster to deploy to.
required: true
cluster_name:
description: |-
Name of GKE cluster to deploy to.
required: true
project_id:
description: |-
Project of GKE cluster to deploy to.
required: true
namespace:
description: |-
Namespace of GKE cluster to deploy to.
If not provided, it will not be passed to the binary.
required: false
expose:
description: |-
The port provided will be used to expose the deployed workload object (i.e., port and targetPort will be set to the value provided in this flag).
If not provided, it will not be passed to the binary.
required: false
k8s_manifests:
description: |-
Local or GCS path to configuration file or directory of configuration files to use to create Kubernetes objects (file or files in directory must end in ".yml" or ".yaml").
Prefix this value with "gs://" to indicate a GCS path.
If not provided, it will not be passed to the binary.
required: false
branding:
icon: 'lock'
color: 'blue'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- '${{ inputs.image }}'
- '${{ inputs.app_name }}'
- '${{ inputs.region }}'
- '${{ inputs.cluster_name }}'
- '${{ inputs.project_id }}'
- '${{ inputs.namespace }}'
- '${{ inputs.expose }}'
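A sketch wiring up the five required inputs; the auth step, project, cluster, and tags are placeholders, and authenticating to Google Cloud before this step is an assumption:

```yaml
steps:
  - uses: google-github-actions/auth@v2          # assumed prerequisite for GKE access
    with:
      credentials_json: ${{ secrets.GCP_SA_KEY }}
  - uses: google-github-actions/deploy-gke@v0    # tag is illustrative
    with:
      image: gcr.io/my-project/helloworld:latest
      app_name: helloworld
      region: us-central1
      cluster_name: my-cluster
      project_id: my-project
      expose: 80   # optional: expose the deployed workload on this port
```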
Action ID: marketplace/azure/k8s-lint
Author: Unknown
Publisher: azure
Repository: github.com/azure/k8s-lint
Lint k8s manifest files
| Name | Required | Description |
|---|---|---|
manifests |
Required | Path to the manifest files which will be used for deployment. |
lintType |
Required | Acceptable values: kubeconform, dryrun Default: kubeconform |
kubeconformOpts |
Optional | Options that are passed to kubeconform e.g. -schema-location Default: -summary |
namespace |
Optional | Target Kubernetes namespace. If the namespace is not provided, commands will run in the default namespace. Default: default |
name: 'Lint k8s manifest files'
description: 'Lint k8s manifest files'
inputs:
# Please ensure you have used either azure/k8s-actions/aks-set-context or azure/k8s-actions/k8s-set-context in the workflow before this action if using dryrun
manifests:
description: 'Path to the manifest files which will be used for deployment.'
required: true
lintType:
description: 'Acceptable values: kubeconform, dryrun'
required: true
default: 'kubeconform'
kubeconformOpts:
description: 'Options that are passed to kubeconform e.g. -schema-location'
required: false
default: '-summary'
namespace:
description: 'Target Kubernetes namespace. If the namespace is not provided, commands will run in the default namespace.'
required: false
default: 'default'
branding:
color: 'green' # optional, decorates the entry in the GitHub Marketplace
runs:
using: 'node20'
main: 'lib/index.js'
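A small sketch of a kubeconform lint run; the manifest path and tag are placeholders (per the comment in the metadata, a cluster context is only needed for the `dryrun` lint type):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: azure/k8s-lint@v1   # tag is illustrative
    with:
      manifests: manifests/deployment.yaml
      lintType: kubeconform
      kubeconformOpts: '-summary'
```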
Action ID: marketplace/codecov/test-results-action
Author: Codecov
Publisher: codecov
Repository: github.com/codecov/test-results-action
GitHub Action that uploads test result reports for your repository to codecov.io
| Name | Required | Description |
|---|---|---|
token |
Optional | Repository Codecov token. Used to authorize report uploads |
binary |
Optional | The file location of a pre-downloaded version of the CLI. If specified, integrity checking will be bypassed. |
codecov_yml_path |
Optional | Specify the path to the Codecov YML |
commit_parent |
Optional | Override to specify the parent commit SHA |
directory |
Optional | Directory to search for test result reports. |
disable_search |
Optional | Disable search for test result files. This is helpful when specifying what files you want to upload with the --file option. |
dry_run |
Optional | Don't upload files to Codecov |
env_vars |
Optional | Environment variables to tag the upload with (e.g. PYTHON | OS,PYTHON) |
exclude |
Optional | Folders to exclude from search |
fail_ci_if_error |
Optional | Specify whether or not CI build should fail if Codecov runs into an error during upload |
file |
Optional | Path to test result file to upload |
files |
Optional | Comma-separated list of files to upload |
flags |
Optional | Flag upload to group test results (e.g. py3.10 | py3.11 | py3.12) |
handle_no_reports_found |
Optional | Raise no exceptions when no test result reports are found |
name |
Optional | User defined upload name. Visible in Codecov UI |
os |
Optional | Override the assumed OS. Options are linux | macos | windows. |
override_branch |
Optional | Specify the branch name |
override_build |
Optional | Specify the build number |
override_build_url |
Optional | The URL of the build where this is running |
override_commit |
Optional | Specify the commit SHA |
override_pr |
Optional | Specify the pull request number |
report_code |
Optional | The code of the report. If unsure, do not include |
root_dir |
Optional | Used when not in git/hg project to identify project root directory |
slug |
Optional | Specify the slug manually (Enterprise use) |
url |
Optional | Specify the base url to upload (Enterprise use) |
use_oidc |
Optional | Use OIDC to authenticate with Codecov |
verbose |
Optional | Specify whether the Codecov output should be verbose |
version |
Optional | Specify which version of the Codecov CLI should be used. Defaults to `latest` |
working-directory |
Optional | Directory in which to execute codecov.sh |
name: 'Codecov Test Result Action'
description: 'GitHub Action that uploads test result reports for your repository to codecov.io'
author: 'Codecov'
inputs:
token:
description: 'Repository Codecov token. Used to authorize report uploads'
required: false
binary:
description: 'The file location of a pre-downloaded version of the CLI. If specified, integrity checking will be bypassed.'
required: false
codecov_yml_path:
description: 'Specify the path to the Codecov YML'
required: false
commit_parent:
description: 'Override to specify the parent commit SHA'
required: false
directory:
description: 'Directory to search for test result reports.'
required: false
disable_search:
description: 'Disable search for test result files. This is helpful when specifying what files you want to upload with the --file option.'
required: false
dry_run:
description: "Don't upload files to Codecov"
required: false
env_vars:
description: 'Environment variables to tag the upload with (e.g. PYTHON | OS,PYTHON)'
required: false
exclude:
description: 'Folders to exclude from search'
required: false
fail_ci_if_error:
description: 'Specify whether or not CI build should fail if Codecov runs into an error during upload'
required: false
file:
description: 'Path to test result file to upload'
required: false
files:
description: 'Comma-separated list of files to upload'
required: false
flags:
description: 'Flag upload to group test results (e.g. py3.10 | py3.11 | py3.12)'
required: false
handle_no_reports_found:
description: 'Raise no exceptions when no test result reports are found'
required: false
name:
description: 'User defined upload name. Visible in Codecov UI'
required: false
os:
description: 'Override the assumed OS. Options are linux | macos | windows.'
required: false
override_branch:
description: 'Specify the branch name'
required: false
override_build:
description: 'Specify the build number'
required: false
override_build_url:
description: 'The URL of the build where this is running'
required: false
override_commit:
description: 'Specify the commit SHA'
required: false
override_pr:
description: 'Specify the pull request number'
required: false
report_code:
description: 'The code of the report. If unsure, do not include'
required: false
root_dir:
description: 'Used when not in git/hg project to identify project root directory'
required: false
slug:
description: 'Specify the slug manually (Enterprise use)'
required: false
url:
description: 'Specify the base url to upload (Enterprise use)'
required: false
use_oidc:
description: 'Use OIDC to authenticate with Codecov'
required: false
verbose:
description: 'Specify whether the Codecov output should be verbose'
required: false
version:
description: 'Specify which version of the Codecov CLI should be used. Defaults to `latest`'
required: false
working-directory:
description: 'Directory in which to execute codecov.sh'
required: false
branding:
color: 'red'
icon: 'umbrella'
runs:
using: 'node20'
main: 'dist/index.js'
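A sketch of uploading a JUnit report produced earlier in the job; the test command, report path, and tag are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
  - run: pytest --junitxml=junit.xml     # placeholder test run producing a report
  - if: ${{ !cancelled() }}              # upload results even when tests fail
    uses: codecov/test-results-action@v1 # tag is illustrative
    with:
      token: ${{ secrets.CODECOV_TOKEN }}
      files: junit.xml
```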
Action ID: marketplace/appleboy/gh-pages-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/gh-pages-action
Deploy static files to GitHub Pages
| Name | Required | Description |
|---|---|---|
username |
Optional | username |
password |
Optional | password |
upstream_name |
Optional | git upstream to target Default: origin |
target_branch |
Optional | git branch to target Default: gh-pages |
temporary_base |
Optional | temporary directory for pages pull Default: .tmp |
pages_directory |
Optional | directory of content to publish Default: docs |
target_directory |
Optional | directory of content to sync |
exclude |
Optional | exclude files matching PATTERN |
commit_author |
Optional | git author name Default: GitHub Action |
commit_author_email |
Optional | git author email Default: github-action@users.noreply.github.com |
remote_url |
Optional | git remote url |
workspace |
Optional | git clone path |
name: 'GitHub Pages Deploy'
description: 'Deploy static files to GitHub Pages'
author: 'Bo-Yi Wu'
inputs:
username:
description: 'username'
password:
description: 'password'
upstream_name:
description: 'git upstream to target'
default: 'origin'
target_branch:
description: 'git branch to target'
default: 'gh-pages'
temporary_base:
description: 'temporary directory for pages pull'
default: '.tmp'
pages_directory:
description: 'directory of content to publish'
default: 'docs'
target_directory:
description: 'directory of content to sync'
exclude:
description: 'exclude files matching PATTERN'
commit_author:
description: 'git author name'
default: 'GitHub Action'
commit_author_email:
description: 'git author email'
default: 'github-action@users.noreply.github.com'
remote_url:
description: 'git remote url'
workspace:
description: 'git clone path'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'cloud'
color: 'gray-dark'
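A sketch publishing a `docs/` directory to the `gh-pages` branch; the build command and tag are placeholders, and using `GITHUB_TOKEN` as the password is an assumption:

```yaml
steps:
  - uses: actions/checkout@v4
  - run: make docs   # placeholder step that writes the site into docs/
  - uses: appleboy/gh-pages-action@v1   # tag is illustrative
    with:
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}   # a PAT may be required instead
      pages_directory: docs
      target_branch: gh-pages
```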
Action ID: marketplace/aws-actions/codeguru-security
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/codeguru-security
AWS CodeGuru Security Action
| Name | Required | Description |
|---|---|---|
source_path |
Optional | Path to source repository Default: . |
aws_region |
Required | AWS region where you want to run workflow |
scan_name |
Required | The name of the scan Default: CGS-Github-${{ github.event.repository.name }} |
analysis_type |
Optional | The type of analysis you want CodeGuru Security to perform in the scan, either Security or All. Defaults to Security type if missing. |
fail_on_severity |
Optional | Fails the action run if any finding has a severity higher than or equal to the provided value. By default the script does not break the action run. [example: Info, Low, Medium, High, Critical] |
name: 'CodeGuru Security'
description: 'AWS CodeGuru Security Action'
branding:
icon: 'cloud'
color: 'orange'
inputs:
source_path:
description: 'Path to source repository'
default: .
required: false
aws_region:
description: 'AWS region where you want to run workflow'
required: true
scan_name:
description: 'The name of the scan'
default: CGS-Github-${{ github.event.repository.name }}
required: true
analysis_type:
description: 'The type of analysis you want CodeGuru Security to perform in the scan, either Security or All. Defaults to Security type if missing.'
required: false
fail_on_severity:
description: 'Fails the action run if any finding has a severity higher than or equal to the provided value. By default the script does not break the action run. [example: Info, Low, Medium, High, Critical]'
required: false
runs:
using: docker
image: docker://public.ecr.aws/l6c8c5q3/codegurusecurity-actions-public:latest
args:
- --source_path
- ${{ inputs.source_path }}
- --aws_region
- ${{ inputs.aws_region }}
- --scan_name
- ${{ inputs.scan_name }}
- --output_file_prefix
- codeguru-security-results
- --output_file_format
- SARIF
- --analysis_type
- ${{ inputs.analysis_type }}
- --fail_on_severity
- ${{ inputs.fail_on_severity }}
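A sketch of a scan that fails the run on High-severity findings; the credentials step, role ARN, region, and tag are assumptions:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: aws-actions/configure-aws-credentials@v4   # assumed prerequisite for AWS access
    with:
      role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
      aws-region: us-east-1
  - uses: aws-actions/codeguru-security@v1           # tag is illustrative
    with:
      source_path: .
      aws_region: us-east-1
      fail_on_severity: High
```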
Action ID: marketplace/julia-actions/julia-lint
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-lint
Lint Julia files
| Name | Description |
|---|---|
lint-results |
name: 'Julia Lint'
description: 'Lint Julia files'
outputs:
lint-results:
value: ${{ steps.lint-step.outputs.lint-results }}
runs:
using: "composite"
steps:
- name: Compute Manifest hash
id: project-hash
shell: pwsh
run: |
$ourHash = Get-FileHash -LiteralPath "$env:GITHUB_ACTION_PATH\Manifest.toml"
"MANIFEST_HASH=$($ourHash.Hash)" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
- name: Check Julia version
shell: bash
id: julia-version
run: |
echo "juliaversion=$(julia -v)" >> $GITHUB_OUTPUT
- uses: actions/cache@v4
id: cache-project
with:
path: ${{ github.action_path }}/.julia
key: julia-lint-cache-${{ runner.os }}-${{ steps.julia-version.outputs.juliaversion }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Install and precompile
if: steps.cache-project.outputs.cache-hit != 'true'
run: julia --project=${{ github.action_path }} -e 'import Pkg; Pkg.instantiate()'
shell: bash
env:
JULIA_DEPOT_PATH: ${{ github.action_path }}/.julia
- uses: actions/cache/save@v4
if: steps.cache-project.outputs.cache-hit != 'true'
with:
path: ${{ github.action_path }}/.julia
key: julia-lint-cache-${{ runner.os }}-${{ steps.julia-version.outputs.juliaversion }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Lint
id: lint-step
run: julia --project=${{ github.action_path }} ${{ github.action_path }}/main.jl
shell: bash
env:
JULIA_DEPOT_PATH: ${{ github.action_path }}/.julia
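Since the composite steps invoke `julia`, a Julia installation is assumed to be available on the runner; the setup step and tags below are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: julia-actions/setup-julia@v2   # assumed prerequisite: puts julia on PATH
  - id: lint
    uses: julia-actions/julia-lint@v1    # tag is illustrative
  # The composite exposes its result through the `lint-results` output.
  - run: echo "${{ steps.lint.outputs.lint-results }}"
```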
Action ID: marketplace/pgrimaud/action-shopify-theme-check
Author: Pierre Grimaud
Publisher: pgrimaud
Repository: github.com/pgrimaud/action-shopify-theme-check
Run Shopify Theme Check on GitHub Actions
name: 'Run Shopify Theme Check'
author: 'Pierre Grimaud'
description: 'Run Shopify Theme Check on GitHub Actions'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'shopping-bag'
color: 'green'
Action ID: marketplace/TryGhost/action-ghost-release
Author: Unknown
Publisher: TryGhost
Repository: github.com/TryGhost/action-ghost-release
Release a new version of Ghost
name: 'Ghost Release'
description: 'Release a new version of Ghost'
runs:
using: 'node16'
main: 'dist/index.js'
Action ID: marketplace/docker/setup-docker-action
Author: docker
Publisher: docker
Repository: github.com/docker/setup-docker-action
Set up Docker for use in GitHub Actions by downloading and installing a version of Docker CE
| Name | Required | Description |
|---|---|---|
version |
Optional | Docker CE version. (e.g, v24.0.9) Default: latest |
channel |
Optional | Docker CE channel. (e.g, stable, edge or test) |
daemon-config |
Optional | Docker daemon JSON configuration |
tcp-port |
Optional | TCP port to expose the Docker API locally |
context |
Optional | Docker context name. (default setup-docker-action) |
set-host |
Optional | Set DOCKER_HOST environment variable to docker socket path Default: false |
rootless |
Optional | Enable Docker rootless mode Default: false |
runtime-basedir |
Optional | Docker runtime base directory |
github-token |
Optional | GitHub Token used to get releases and download assets Default: ${{ github.token }} |
| Name | Description |
|---|---|
sock |
Docker socket path |
tcp |
Docker TCP address if tcp-port is set |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: Docker Setup Docker
description: Set up Docker for use in GitHub Actions by downloading and installing a version of Docker CE
author: docker
branding:
icon: 'anchor'
color: 'blue'
inputs:
version:
description: 'Docker CE version. (e.g, v24.0.9)'
required: false
default: 'latest'
channel:
description: 'Docker CE channel. (e.g, stable, edge or test)'
required: false
daemon-config:
description: 'Docker daemon JSON configuration'
required: false
tcp-port:
description: 'TCP port to expose the Docker API locally'
required: false
context:
description: 'Docker context name. (default setup-docker-action)'
required: false
set-host:
description: 'Set DOCKER_HOST environment variable to docker socket path'
default: 'false'
required: false
rootless:
description: 'Enable Docker rootless mode'
default: 'false'
required: false
runtime-basedir:
description: 'Docker runtime base directory'
required: false
github-token:
description: "GitHub Token used to get releases and download assets"
default: ${{ github.token }}
required: false
outputs:
sock:
description: "Docker socket path"
tcp:
description: "Docker TCP address if tcp-port is set"
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
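A sketch enabling rootless mode and reading the `sock` output; the tag is illustrative:

```yaml
steps:
  - id: docker
    uses: docker/setup-docker-action@v4   # tag is illustrative
    with:
      rootless: true
      set-host: true   # export DOCKER_HOST pointing at the new socket
  - run: |
      echo "Docker socket: ${{ steps.docker.outputs.sock }}"
      docker info
```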
Action ID: marketplace/azure/deploy-to-aks
Author: Unknown
Publisher: azure
Repository: github.com/azure/deploy-to-aks
Deploy to an Azure Kubernetes Service cluster
| Name | Required | Description |
|---|---|---|
method |
Required | Acceptable values: kubeconfig or service-account Default: kubeconfig |
kubeconfig |
Optional | Contents of kubeconfig file |
context |
Optional | If your kubeconfig has multiple contexts, use this field to use a specific context, otherwise the default one would be chosen |
k8s-url |
Optional | Cluster Url |
k8s-secret |
Optional | Service account secret. Run kubectl get serviceaccounts <service-account-name> -o yaml and copy the service-account-secret-name. Copy the output of kubectl get secret <service-account-secret-name> -o yaml |
container-registry-url |
Optional | Container registry url |
container-registry-username |
Optional | Container registry username |
container-registry-password |
Optional | Container registry password |
container-registry-email |
Optional | Container registry email |
secret-type |
Required | Type of Kubernetes secret. For example, docker-registry or generic Default: docker-registry |
secret-name |
Required | Name of the secret. You can use this secret name in the Kubernetes YAML configuration file. |
arguments |
Optional | Specify keys and literal values to insert in generic type secret. For example, --from-literal=key1=value1 --from-literal=key2="top secret". |
manifests |
Required | Path to the manifest files which will be used for deployment. |
images |
Optional | Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files Example: contosodemo.azurecr.io/helloworld:test |
imagepullsecrets |
Optional | Name of a docker-registry secret that has already been set up within the cluster. Each of these secret names are added under imagePullSecrets field for the workloads found in the input manifest files |
kubectl-version |
Optional | Version of kubectl. Installs a specific version of kubectl binary |
strategy |
Optional | Deployment strategy to be used. Allowed values are none, canary Default: none |
traffic-split-method |
Optional | Traffic split method to be used. Allowed values are pod, smi Default: pod |
baseline-and-canary-replicas |
Optional | Baseline and canary replicas count; valid values are between 0 and 100. |
percentage |
Optional | Percentage of traffic redirected to the canary deployment |
args |
Optional | Arguments |
action |
Required | deploy/promote/reject Default: deploy |
namespace |
Optional | Choose the target Kubernetes namespace. If the namespace is not provided, the commands will run in the default namespace. |
| Name | Description |
|---|---|
secret-name |
Secret name |
name: 'Deploy to AKS'
description: 'Deploy to an Azure Kubernetes Service cluster'
inputs:
# inputs specific to set context
method:
description: 'Acceptable values: kubeconfig or service-account'
required: true
default: 'kubeconfig'
kubeconfig:
description: 'Contents of kubeconfig file'
required: false
default: ''
context:
description: 'If your kubeconfig has multiple contexts, use this field to use a specific context, otherwise the default one would be chosen'
required: false
default: ''
k8s-url:
description: 'Cluster Url'
required: false
default: ''
k8s-secret:
description: 'Service account secret. Run kubectl get serviceaccounts <service-account-name> -o yaml and copy the service-account-secret-name. Copy the output of kubectl get secret <service-account-secret-name> -o yaml'
required: false
default: ''
# inputs specific to create secret
container-registry-url:
description: 'Container registry url'
required: false
container-registry-username:
description: 'Container registry username'
required: false
container-registry-password:
description: 'Container registry password'
required: false
container-registry-email:
description: 'Container registry email'
required: false
secret-type:
description: 'Type of Kubernetes secret. For example, docker-registry or generic'
required: true
default: 'docker-registry'
secret-name:
description: 'Name of the secret. You can use this secret name in the Kubernetes YAML configuration file.'
required: true
arguments:
description: 'Specify keys and literal values to insert in generic type secret. For example, --from-literal=key1=value1 --from-literal=key2="top secret".'
required: false
# inputs specific to deploy to aks
manifests:
description: 'Path to the manifest files which will be used for deployment.'
required: true
default: ''
images:
description: 'Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files Example: contosodemo.azurecr.io/helloworld:test'
required: false
imagepullsecrets:
description: 'Name of a docker-registry secret that has already been set up within the cluster. Each of these secret names are added under imagePullSecrets field for the workloads found in the input manifest files'
required: false
kubectl-version:
description: 'Version of kubectl. Installs a specific version of kubectl binary'
required: false
strategy:
description: 'Deployment strategy to be used. Allowed values are none, canary'
required: false
default: 'none'
traffic-split-method:
description: "Traffic split method to be used. Allowed values are pod, smi"
required: false
default: 'pod'
baseline-and-canary-replicas:
description: 'Baseline and canary replicas count; valid values are between 0 and 100.'
required: false
default: 0
percentage:
description: 'Percentage of traffic redirected to the canary deployment'
required: false
default: 0
args:
description: 'Arguments'
required: false
action:
description: 'deploy/promote/reject'
required: true
default: 'deploy'
# Common Inputs
namespace:
description: 'Choose the target Kubernetes namespace. If the namespace is not provided, the commands will run in the default namespace.'
required: false
outputs:
secret-name:
description: 'Secret name'
branding:
color: 'green' # optional, decorates the entry in the GitHub Marketplace
runs:
using: 'node12'
main: 'lib/run.js'
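A sketch of a plain `deploy` using kubeconfig authentication; the secret names, manifest path, and tag are placeholders, and the image value reuses the example from the input description:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: azure/deploy-to-aks@v1   # tag is illustrative
    with:
      method: kubeconfig
      kubeconfig: ${{ secrets.KUBE_CONFIG }}
      secret-type: docker-registry
      secret-name: regcred
      manifests: manifests/deployment.yaml
      images: contosodemo.azurecr.io/helloworld:test
      imagepullsecrets: regcred
      action: deploy
```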
Action ID: marketplace/dflook/tofu-fmt-check
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-fmt-check
Check that OpenTofu configuration files are in canonical format
| Name | Required | Description |
|---|---|---|
path |
Optional | The path containing OpenTofu files to check the formatting of. Default: . |
workspace |
Optional | OpenTofu workspace to inspect when discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
Default: default |
variables |
Optional | Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified. See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified. See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure` because the format check failed, this will be set to 'check-failed'. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run a step when the format check fails. |
name: tofu-fmt-check
description: Check that OpenTofu configuration files are in canonical format
author: Daniel Flook
inputs:
path:
description: The path containing OpenTofu files to check the formatting of.
required: false
default: "."
workspace:
description: |
OpenTofu workspace to inspect when discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: "default"
variables:
description: |
Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: |
List of OpenTofu backend config values, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
outputs:
failure-reason:
description: |
When the job outcome is `failure` because the format check failed, this will be set to 'check-failed'.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a step when the format check fails.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/fmt-check.sh
branding:
icon: globe
color: purple
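A sketch that checks formatting and reacts to the `failure-reason` output; the module path and tag are placeholders:

```yaml
jobs:
  fmt-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: fmt
        uses: dflook/tofu-fmt-check@v2   # tag is illustrative
        with:
          path: modules/network
      # Only react when the failure was specifically a formatting failure.
      - if: ${{ failure() && steps.fmt.outputs.failure-reason == 'check-failed' }}
        run: echo "OpenTofu files are not in canonical format; run tofu fmt."
```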
Action ID: marketplace/dflook/tofu-refresh
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-refresh
Refresh OpenTofu state
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the OpenTofu root module to refresh state for Default: . |
workspace |
Optional | OpenTofu workspace to run the refresh state in Default: default |
variables |
Optional | Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
target |
Optional | List of resources to target, one per line. The refresh will be limited to these resources and their dependencies. |
exclude |
Optional | List of resources to exclude from the refresh operation, one per line. The refresh will include all resources except the specified ones and their dependencies. Requires OpenTofu 1.9+. |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure`, this output may be set. The value may be one of: - `refresh-failed` - The OpenTofu refresh operation failed. - `state-locked` - The Terraform state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
lock-info |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: ```json { "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" } ``` |
run_id |
If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id. |
name: tofu-refresh
description: Refresh OpenTofu state
author: Daniel Flook
inputs:
path:
description: Path to the OpenTofu root module to refresh state for
required: false
default: "."
workspace:
description: OpenTofu workspace to run the refresh state in
required: false
default: "default"
variables:
description: |
Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
target:
description: |
List of resources to target, one per line.
The refresh will be limited to these resources and their dependencies.
required: false
default: ""
exclude:
description: |
List of resources to exclude from the refresh operation, one per line.
The refresh will include all resources except the specified ones and their dependencies.
Requires OpenTofu 1.9+.
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `refresh-failed` - The OpenTofu refresh operation failed.
- `state-locked` - The Terraform state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
run_id:
description: If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/refresh.sh
branding:
icon: globe
color: purple
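A sketch of refreshing state for one root module with a tfvars file; the paths, workspace, and tag are placeholders:

```yaml
jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/tofu-refresh@v2   # tag is illustrative
        with:
          path: environments/prod
          workspace: default
          var_file: |
            environments/prod/prod.tfvars
```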
Action ID: marketplace/JamesIves/github-pages-deploy-action
Author: James Ives <iam@jamesiv.es>
Publisher: JamesIves
Repository: github.com/JamesIves/github-pages-deploy-action
This action will handle the deployment process of your project to GitHub Pages.
| Name | Required | Description |
|---|---|---|
ssh-key |
Optional | This option allows you to define a private SSH key to be used in conjunction with a repository deployment key to deploy using SSH. The private key should be stored in the `secrets / with` menu **as a secret**. The public key should be stored in the repository's deployment keys menu and be given write access. Alternatively you can set this field to `true` to enable SSH endpoints for deployment without configuring the ssh client. This can be useful if you've already set up the SSH client using another package or action in a previous step. |
token |
Optional | This option defaults to the repository scoped GitHub Token. However if you need more permissions for things such as deploying to another repository, you can add a Personal Access Token (PAT) here. This should be stored in the `secrets / with` menu **as a secret**.
We recommend using a service account with the least permissions necessary and when generating a new PAT that you select the least permission scopes required.
[Learn more about creating and using encrypted secrets here.](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
Default: ${{ github.token }} |
branch |
Optional | This is the branch you wish to deploy to, for example gh-pages or docs. Default: gh-pages |
folder |
Required | The folder in your repository that you want to deploy. If your build script compiles into a directory named build you would put it here. Folder paths cannot have a leading / or ./. If you wish to deploy the root directory you can place a . here. |
target-folder |
Optional | If you would like to push the contents of the deployment folder into a specific directory on the deployment branch you can specify it here. |
commit-message |
Optional | If you need to customize the commit message for an integration you can do so. |
clean |
Optional | If your project generates hashed files on build you can use this option to automatically delete them from the target folder on the deployment branch with each deploy. This option is on by default and can be toggled off by setting it to false. Default: True |
clean-exclude |
Optional | If you need to use clean but you would like to preserve certain files or folders you can use this option. This should contain each pattern as a single line in a multiline string. |
dry-run |
Optional | Do not actually push back, but use `--dry-run` on `git push` invocations instead. |
force |
Optional | Whether to force-push and overwrite any existing deployment. Setting this to false will attempt to rebase simultaneous deployments. This option is on by default and can be toggled off by setting it to false. Default: True |
git-config-name |
Optional | Allows you to customize the name that is attached to the GitHub config which is used when pushing the deployment commits. If this is not included it will use the name in the GitHub context, followed by the name of the action. |
git-config-email |
Optional | Allows you to customize the email that is attached to the GitHub config which is used when pushing the deployment commits. If this is not included it will use the email in the GitHub context, followed by a generic noreply GitHub email. |
repository-name |
Optional | Allows you to specify a different repository path so long as you have permissions to push to it. This should be formatted like so: JamesIves/github-pages-deploy-action |
tag |
Optional | Add a tag to the commit, this can be used like so: 'v0.1'. Only works when 'dry-run' is not used. |
single-commit |
Optional | This option can be used if you'd prefer to have a single commit on the deployment branch instead of maintaining the full history. |
silent |
Optional | Silences the action output preventing it from displaying git messages. |
attempt-limit |
Optional | How many rebase attempts to make before suspending the job. This option defaults to `3` and may be useful to increase when there are multiple deployments in a single branch. Default: 3 |
| Name | Description |
|---|---|
deployment-status |
The status of the deployment that indicates if the run failed or passed. Possible outputs include: success|failed|skipped |
name: 'Deploy to GitHub Pages'
description: 'This action will handle the deployment process of your project to GitHub Pages.'
author: 'James Ives <iam@jamesiv.es>'
runs:
using: 'node20'
main: 'lib/main.js'
branding:
icon: 'git-commit'
color: 'orange'
inputs:
ssh-key:
description: >
This option allows you to define a private SSH key to be used in conjunction with a repository deployment key to deploy using SSH.
The private key should be stored in the `secrets / with` menu **as a secret**. The public key should be stored in the repository's deployment
keys menu and be given write access.
Alternatively you can set this field to `true` to enable SSH endpoints for deployment without configuring the SSH client. This can be useful if you've
already set up the SSH client using another package or action in a previous step.
required: false
token:
description: >
This option defaults to the repository scoped GitHub Token.
However if you need more permissions for things such as deploying to another repository, you can add a Personal Access Token (PAT) here.
This should be stored in the `secrets / with` menu **as a secret**.
We recommend using a service account with the least permissions necessary
and when generating a new PAT that you select the least permission scopes required.
[Learn more about creating and using encrypted secrets here.](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
required: false
default: ${{ github.token }}
branch:
description: 'This is the branch you wish to deploy to, for example gh-pages or docs.'
required: false
default: gh-pages
folder:
description: 'The folder in your repository that you want to deploy. If your build script compiles into a directory named build you would put it here. Folder paths cannot have a leading / or ./. If you wish to deploy the root directory you can place a . here.'
required: true
target-folder:
description: 'If you would like to push the contents of the deployment folder into a specific directory on the deployment branch you can specify it here.'
required: false
commit-message:
description: 'If you need to customize the commit message for an integration you can do so.'
required: false
clean:
description: 'If your project generates hashed files on build you can use this option to automatically delete them from the target folder on the deployment branch with each deploy. This option is on by default and can be toggled off by setting it to false.'
required: false
default: true
clean-exclude:
description: 'If you need to use clean but you would like to preserve certain files or folders you can use this option. This should contain each pattern as a single line in a multiline string.'
required: false
dry-run:
description: 'Do not actually push back, but use `--dry-run` on `git push` invocations instead.'
required: false
force:
description: 'Whether to force-push and overwrite any existing deployment. Setting this to false will attempt to rebase simultaneous deployments. This option is on by default and can be toggled off by setting it to false.'
required: false
default: true
git-config-name:
description: 'Allows you to customize the name that is attached to the GitHub config which is used when pushing the deployment commits. If this is not included it will use the name in the GitHub context, followed by the name of the action.'
required: false
git-config-email:
description: 'Allows you to customize the email that is attached to the GitHub config which is used when pushing the deployment commits. If this is not included it will use the email in the GitHub context, followed by a generic noreply GitHub email.'
required: false
repository-name:
description: 'Allows you to specify a different repository path so long as you have permissions to push to it. This should be formatted like so: JamesIves/github-pages-deploy-action'
required: false
tag:
description: "Add a tag to the commit, this can be used like so: 'v0.1'. Only works when 'dry-run' is not used."
required: false
single-commit:
description: "This option can be used if you'd prefer to have a single commit on the deployment branch instead of maintaining the full history."
required: false
silent:
description: 'Silences the action output preventing it from displaying git messages.'
required: false
attempt-limit:
description: 'How many rebase attempts to make before suspending the job. This option defaults to `3` and may be useful to increase when there are multiple deployments in a single branch.'
required: false
default: 3
outputs:
deployment-status:
description: 'The status of the deployment that indicates if the run failed or passed. Possible outputs include: success|failed|skipped'
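A minimal usage sketch for this action (the trigger, build commands, and `@v4` ref below are illustrative assumptions, not taken from the entry above):

```yaml
name: Deploy to GitHub Pages
on:
  push:
    branches: [main]        # assumed default branch
permissions:
  contents: write           # the action pushes commits to the deployment branch
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build         # hypothetical build step that emits a `build` folder
        run: |
          npm ci
          npm run build
      - name: Deploy
        id: deploy
        uses: JamesIves/github-pages-deploy-action@v4
        with:
          folder: build     # required: the folder to publish
          branch: gh-pages  # optional: defaults to gh-pages
```

If the deploy step is given an `id` as above, the `deployment-status` output can be read later via `${{ steps.deploy.outputs.deployment-status }}`.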
Action ID: marketplace/peter-evans/dockerhub-description
Author: Peter Evans
Publisher: peter-evans
Repository: github.com/peter-evans/dockerhub-description
An action to update a Docker Hub repository description from README.md
| Name | Required | Description |
|---|---|---|
username |
Required | Docker Hub username |
password |
Required | Docker Hub password or Personal Access Token with read/write/delete scope |
repository |
Optional | Docker Hub repository in the format `<namespace>/<name>` Default: `github.repository` |
short-description |
Optional | Docker Hub repository short description |
readme-filepath |
Optional | Path to the repository readme Default: `./README.md` |
enable-url-completion |
Optional | Enables completion of relative URLs to absolute ones Default: `false` |
image-extensions |
Optional | File extensions that will be treated as images Default: `bmp,gif,jpg,jpeg,png,svg,webp` |
name: 'Docker Hub Description'
author: 'Peter Evans'
description: 'An action to update a Docker Hub repository description from README.md'
inputs:
username:
description: Docker Hub username
required: true
password:
description: Docker Hub password or Personal Access Token with read/write/delete scope
required: true
repository:
description: >
Docker Hub repository in the format `<namespace>/<name>`
Default: `github.repository`
short-description:
description: Docker Hub repository short description
readme-filepath:
description: >
Path to the repository readme
Default: `./README.md`
enable-url-completion:
description: >
Enables completion of relative URLs to absolute ones
Default: `false`
image-extensions:
description: >
File extensions that will be treated as images
Default: `bmp,gif,jpg,jpeg,png,svg,webp`
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'upload'
color: 'blue'
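A minimal usage sketch (the secret names, repository, and `@v4` ref are illustrative assumptions):

```yaml
name: Sync Docker Hub description
on:
  push:
    branches: [main]
    paths: [README.md]      # only run when the README changes
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peter-evans/dockerhub-description@v4
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # assumed secret name
          password: ${{ secrets.DOCKERHUB_TOKEN }}      # assumed secret name; PAT with read/write/delete scope
          repository: example-org/example-image         # optional; defaults to github.repository
          readme-filepath: ./README.md                  # optional; default shown for clarity
```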
Action ID: marketplace/samuelmeuli/lint-action
Author: Samuel Meuli
Publisher: samuelmeuli
Repository: github.com/samuelmeuli/lint-action
GitHub Action for detecting and fixing linting errors
| Name | Required | Description |
|---|---|---|
github_token |
Optional | The GitHub token used to authenticate with GitHub. Default: ${{ github.token }} |
continue_on_error |
Optional | Whether the workflow run should also fail when linter failures are detected Default: true |
auto_fix |
Optional | Whether linters should try to fix code style issues automatically Default: false |
commit |
Optional | Whether to commit and push the changes made by auto_fix Default: true |
git_no_verify |
Optional | Bypass the pre-commit and pre-push git hooks Default: false |
git_name |
Optional | Username for auto-fix commits Default: Lint Action |
git_email |
Optional | Email address for auto-fix commits Default: lint-action@samuelmeuli.com |
commit_message |
Optional | Template for auto-fix commit messages. The "${linter}" variable can be used to insert the name of the linter which has created the auto-fix Default: Fix code style issues with ${linter} |
check_name |
Optional | Template for the name of the check run. The "${linter}" and "${dir}" variables can be used to insert the name and directory of the linter. Default: ${linter} |
neutral_check_on_warning |
Optional | Whether the check run should conclude with a neutral status instead of success when the linter finds only warnings Default: false |
stylelint |
Optional | Enable or disable stylelint checks Default: false |
stylelint_args |
Optional | Additional arguments to pass to the linter |
stylelint_dir |
Optional | Directory where the stylelint command should be run |
stylelint_extensions |
Optional | Extensions of files to check with stylelint Default: css |
stylelint_command_prefix |
Optional | Shell command to prepend to the linter command |
stylelint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
gofmt |
Optional | Enable or disable gofmt checks Default: false |
gofmt_args |
Optional | Additional arguments to pass to the linter |
gofmt_dir |
Optional | Directory where the gofmt command should be run |
gofmt_extensions |
Optional | Extensions of files to check with gofmt Default: go |
gofmt_command_prefix |
Optional | Shell command to prepend to the linter command |
gofmt_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
golint |
Optional | Enable or disable golint checks Default: false |
golint_args |
Optional | Additional arguments to pass to the linter |
golint_dir |
Optional | Directory where the golint command should be run |
golint_extensions |
Optional | Extensions of files to check with golint Default: go |
golint_command_prefix |
Optional | Shell command to prepend to the linter command |
golint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
eslint |
Optional | Enable or disable ESLint checks Default: false |
eslint_args |
Optional | Additional arguments to pass to the linter |
eslint_dir |
Optional | Directory where the ESLint command should be run |
eslint_extensions |
Optional | Extensions of files to check with ESLint Default: js |
eslint_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
eslint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
prettier |
Optional | Enable or disable Prettier checks Default: false |
prettier_args |
Optional | Additional arguments to pass to the linter |
prettier_dir |
Optional | Directory where the Prettier command should be run |
prettier_extensions |
Optional | Extensions of files to check with Prettier Default: css,html,js,json,jsx,md,sass,scss,ts,tsx,vue,yaml,yml |
prettier_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
prettier_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
xo |
Optional | Enable or disable XO checks Default: false |
xo_args |
Optional | Additional arguments to pass to the linter |
xo_dir |
Optional | Directory where the XO command should be run |
xo_extensions |
Optional | Extensions of files to check with XO Default: js |
xo_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
xo_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
tsc |
Optional | Enable or disable TypeScript checks Default: false |
tsc_args |
Optional | Additional arguments to pass to the linter |
tsc_dir |
Optional | Directory where the TSC command should be run |
tsc_extensions |
Optional | Extensions of files to check with TSC Default: ts |
tsc_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
tsc_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
php_codesniffer |
Optional | Enable or disable PHP_CodeSniffer checks Default: false |
php_codesniffer_args |
Optional | Additional arguments to pass to the linter |
php_codesniffer_dir |
Optional | Directory where the PHP_CodeSniffer command should be run |
php_codesniffer_extensions |
Optional | Extensions of files to check with PHP_CodeSniffer Default: php |
php_codesniffer_command_prefix |
Optional | Shell command to prepend to the linter command |
php_codesniffer_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
autopep8 |
Optional | Enable or disable autopep8 checks Default: false |
autopep8_args |
Optional | Additional arguments to pass to the linter |
autopep8_dir |
Optional | Directory where the autopep8 command should be run |
autopep8_extensions |
Optional | Extensions of files to check with autopep8 Default: py |
autopep8_command_prefix |
Optional | Shell command to prepend to the linter command |
autopep8_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
black |
Optional | Enable or disable Black checks Default: false |
black_args |
Optional | Additional arguments to pass to the linter |
black_dir |
Optional | Directory where the Black command should be run |
black_extensions |
Optional | Extensions of files to check with Black Default: py |
black_command_prefix |
Optional | Shell command to prepend to the linter command |
black_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
clang_format |
Optional | Enable or disable ClangFormat checks Default: false |
clang_format_args |
Optional | Additional arguments to pass to the linter |
clang_format_dir |
Optional | Directory where the ClangFormat command should be run |
clang_format_extensions |
Optional | Extensions of files to check with ClangFormat Default: c,cc,cpp,h,hpp,m,mm |
clang_format_command_prefix |
Optional | Shell command to prepend to the linter command |
clang_format_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
flake8 |
Optional | Enable or disable Flake8 checks Default: false |
flake8_args |
Optional | Additional arguments to pass to the linter |
flake8_dir |
Optional | Directory where the Flake8 command should be run |
flake8_extensions |
Optional | Extensions of files to check with Flake8 Default: py |
flake8_command_prefix |
Optional | Shell command to prepend to the linter command |
flake8_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
mypy |
Optional | Enable or disable Mypy checks Default: false |
mypy_args |
Optional | Additional arguments to pass to the linter |
mypy_dir |
Optional | Directory where the Mypy command should be run |
mypy_extensions |
Optional | Extensions of files to check with Mypy Default: py |
mypy_command_prefix |
Optional | Shell command to prepend to the linter command |
mypy_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
oitnb |
Optional | Enable or disable oitnb checks Default: false |
oitnb_args |
Optional | Additional arguments to pass to the linter |
oitnb_dir |
Optional | Directory where the oitnb command should be run |
oitnb_extensions |
Optional | Extensions of files to check with oitnb Default: py |
oitnb_command_prefix |
Optional | Shell command to prepend to the linter command |
oitnb_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
pylint |
Optional | Enable or disable Pylint checks Default: false |
pylint_args |
Optional | Additional arguments to pass to the linter |
pylint_dir |
Optional | Directory where the Pylint command should be run |
pylint_extensions |
Optional | Extensions of files to check with Pylint Default: py |
pylint_command_prefix |
Optional | Shell command to prepend to the linter command |
pylint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
rubocop |
Optional | Enable or disable RuboCop checks Default: false |
rubocop_args |
Optional | Additional arguments to pass to the linter |
rubocop_dir |
Optional | Directory where the RuboCop command should be run |
rubocop_extensions |
Optional | Extensions of files to check with RuboCop Default: rb |
rubocop_command_prefix |
Optional | Shell command to prepend to the linter command |
rubocop_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
erblint |
Optional | Enable or disable ERB Lint checks Default: false |
erblint_args |
Optional | Additional arguments to pass to the linter |
erblint_dir |
Optional | Directory where the ERB Lint command should be run |
erblint_extensions |
Optional | Extensions of files to check with ERB Lint Default: erb |
erblint_command_prefix |
Optional | Shell command to prepend to the linter command |
erblint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
clippy |
Optional | Enable or disable clippy Default: false |
clippy_args |
Optional | Additional arguments to pass to the linter |
clippy_dir |
Optional | Directory where the clippy command should be run |
clippy_extensions |
Optional | Extensions of files to check with clippy Default: rs |
clippy_command_prefix |
Optional | Shell command to prepend to the linter command |
clippy_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
rustfmt |
Optional | Enable or disable rustfmt Default: false |
rustfmt_args |
Optional | Additional arguments to pass to the linter Default: -- --color=never |
rustfmt_extensions |
Optional | Extensions of files to check with rustfmt Default: rs |
rustfmt_dir |
Optional | Directory where the rustfmt command should be run |
rustfmt_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swiftformat |
Optional | Enable or disable SwiftFormat checks Default: false |
swiftformat_args |
Optional | Additional arguments to pass to the linter |
swiftformat_dir |
Optional | Directory where the SwiftFormat command should be run |
swiftformat_extensions |
Optional | Extensions of files to check with SwiftFormat Default: swift |
swiftformat_command_prefix |
Optional | Shell command to prepend to the linter command |
swiftformat_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swift_format_lockwood |
Optional | Enable or disable SwiftFormat checks Default: false |
swift_format_lockwood_args |
Optional | Additional arguments to pass to the linter |
swift_format_lockwood_dir |
Optional | Directory where the SwiftFormat command should be run |
swift_format_lockwood_extensions |
Optional | Extensions of files to check with SwiftFormat Default: swift |
swift_format_lockwood_command_prefix |
Optional | Shell command to prepend to the linter command |
swift_format_lockwood_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swift_format_official |
Optional | Enable or disable swift-format checks Default: false |
swift_format_official_args |
Optional | Additional arguments to pass to the linter |
swift_format_official_dir |
Optional | Directory where the swift-format command should be run |
swift_format_official_extensions |
Optional | Extensions of files to check with swift-format Default: swift |
swift_format_official_command_prefix |
Optional | Shell command to prepend to the linter command |
swift_format_official_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swiftlint |
Optional | Enable or disable SwiftLint checks Default: false |
swiftlint_args |
Optional | Additional arguments to pass to the linter |
swiftlint_dir |
Optional | Directory where the SwiftLint command should be run |
swiftlint_extensions |
Optional | Extensions of files to check with SwiftLint Default: swift |
swiftlint_command_prefix |
Optional | Shell command to prepend to the linter command |
swiftlint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
dotnet_format |
Optional | Enable or disable dotnet-format checks Default: false |
dotnet_format_args |
Optional | Additional arguments to pass to the linter |
dotnet_format_dir |
Optional | Directory where the dotnet-format command should be run |
dotnet_format_extensions |
Optional | Extensions of files to check with dotnet-format Default: cs |
dotnet_format_command_prefix |
Optional | Shell command to prepend to the linter command |
dotnet_format_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
name: Lint Action
author: Samuel Meuli
description: GitHub Action for detecting and fixing linting errors
inputs:
github_token:
description: The GitHub token used to authenticate with GitHub.
required: false
default: ${{ github.token }}
continue_on_error:
description: Whether the workflow run should also fail when linter failures are detected
required: false
default: "true"
auto_fix:
description: Whether linters should try to fix code style issues automatically
required: false
default: "false"
commit:
description: Whether to commit and push the changes made by auto_fix
required: false
default: "true"
git_no_verify:
description: Bypass the pre-commit and pre-push git hooks
required: false
default: "false"
git_name:
description: Username for auto-fix commits
required: false
default: Lint Action
git_email:
description: Email address for auto-fix commits
required: false
default: "lint-action@samuelmeuli.com"
commit_message:
description: 'Template for auto-fix commit messages. The "${linter}" variable can be used to insert the name of the linter which has created the auto-fix'
required: false
default: "Fix code style issues with ${linter}"
check_name:
description: 'Template for the name of the check run. The "${linter}" and "${dir}" variables can be used to insert the name and directory of the linter.'
required: false
default: "${linter}"
neutral_check_on_warning:
description: Whether the check run should conclude with a neutral status instead of success when the linter finds only warnings
required: false
default: "false"
# CSS
stylelint:
description: Enable or disable stylelint checks
required: false
default: "false"
stylelint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
stylelint_dir:
description: Directory where the stylelint command should be run
required: false
stylelint_extensions:
description: Extensions of files to check with stylelint
required: false
default: "css"
stylelint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
stylelint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
# Go
gofmt:
description: Enable or disable gofmt checks
required: false
default: "false"
gofmt_args:
description: Additional arguments to pass to the linter
required: false
default: ""
gofmt_dir:
description: Directory where the gofmt command should be run
required: false
gofmt_extensions:
description: Extensions of files to check with gofmt
required: false
default: "go"
gofmt_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
gofmt_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
golint:
description: Enable or disable golint checks
required: false
default: "false"
golint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
golint_dir:
description: Directory where the golint command should be run
required: false
golint_extensions:
description: Extensions of files to check with golint
required: false
default: "go"
golint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
golint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# JavaScript
eslint:
description: Enable or disable ESLint checks
required: false
default: "false"
eslint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
eslint_dir:
description: Directory where the ESLint command should be run
required: false
eslint_extensions:
description: Extensions of files to check with ESLint
required: false
default: "js"
eslint_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
eslint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
prettier:
description: Enable or disable Prettier checks
required: false
default: "false"
prettier_args:
description: Additional arguments to pass to the linter
required: false
default: ""
prettier_dir:
description: Directory where the Prettier command should be run
required: false
prettier_extensions:
description: Extensions of files to check with Prettier
required: false
default: "css,html,js,json,jsx,md,sass,scss,ts,tsx,vue,yaml,yml"
prettier_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
prettier_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
xo:
description: Enable or disable XO checks
required: false
default: "false"
xo_args:
description: Additional arguments to pass to the linter
required: false
default: ""
xo_dir:
description: Directory where the XO command should be run
required: false
xo_extensions:
description: Extensions of files to check with XO
required: false
default: "js"
xo_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
xo_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
# TypeScript
tsc:
description: Enable or disable TypeScript checks
required: false
default: "false"
tsc_args:
description: Additional arguments to pass to the linter
required: false
default: ""
tsc_dir:
description: Directory where the TSC command should be run
required: false
tsc_extensions:
description: Extensions of files to check with TSC
required: false
default: "ts"
tsc_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
tsc_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# PHP
php_codesniffer:
description: Enable or disable PHP_CodeSniffer checks
required: false
default: "false"
php_codesniffer_args:
description: Additional arguments to pass to the linter
required: false
default: ""
php_codesniffer_dir:
description: Directory where the PHP_CodeSniffer command should be run
required: false
php_codesniffer_extensions:
description: Extensions of files to check with PHP_CodeSniffer
required: false
default: "php"
php_codesniffer_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
php_codesniffer_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# Python
autopep8:
description: Enable or disable autopep8 checks
required: false
default: "false"
autopep8_args:
description: Additional arguments to pass to the linter
required: false
default: ""
autopep8_dir:
description: Directory where the autopep8 command should be run
required: false
autopep8_extensions:
description: Extensions of files to check with autopep8
required: false
default: "py"
autopep8_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
autopep8_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
black:
description: Enable or disable Black checks
required: false
default: "false"
black_args:
description: Additional arguments to pass to the linter
required: false
default: ""
black_dir:
description: Directory where the Black command should be run
required: false
black_extensions:
description: Extensions of files to check with Black
required: false
default: "py"
black_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
black_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
clang_format:
description: Enable or disable ClangFormat checks
required: false
default: "false"
clang_format_args:
description: Additional arguments to pass to the linter
required: false
default: ""
clang_format_dir:
description: Directory where the ClangFormat command should be run
required: false
clang_format_extensions:
description: Extensions of files to check with ClangFormat
required: false
default: "c,cc,cpp,h,hpp,m,mm"
clang_format_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
clang_format_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
flake8:
description: Enable or disable Flake8 checks
required: false
default: "false"
flake8_args:
description: Additional arguments to pass to the linter
required: false
default: ""
flake8_dir:
description: Directory where the Flake8 command should be run
required: false
flake8_extensions:
description: Extensions of files to check with Flake8
required: false
default: "py"
flake8_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
flake8_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
mypy:
description: Enable or disable Mypy checks
required: false
default: "false"
mypy_args:
description: Additional arguments to pass to the linter
required: false
default: ""
mypy_dir:
description: Directory where the Mypy command should be run
required: false
mypy_extensions:
description: Extensions of files to check with Mypy
required: false
default: "py"
mypy_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
mypy_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
oitnb:
description: Enable or disable oitnb checks
required: false
default: "false"
oitnb_args:
description: Additional arguments to pass to the linter
required: false
default: ""
oitnb_dir:
description: Directory where the oitnb command should be run
required: false
oitnb_extensions:
description: Extensions of files to check with oitnb
required: false
default: "py"
oitnb_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
oitnb_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
pylint:
description: Enable or disable Pylint checks
required: false
default: "false"
pylint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
pylint_dir:
description: Directory where the Pylint command should be run
required: false
pylint_extensions:
description: Extensions of files to check with Pylint
required: false
default: "py"
pylint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
pylint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# Ruby
rubocop:
description: Enable or disable RuboCop checks
required: false
default: "false"
rubocop_args:
description: Additional arguments to pass to the linter
required: false
default: ""
rubocop_dir:
description: Directory where the RuboCop command should be run
required: false
rubocop_extensions:
description: Extensions of files to check with RuboCop
required: false
default: "rb"
rubocop_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
rubocop_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
erblint:
description: Enable or disable ERB Lint checks
required: false
default: "false"
erblint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
erblint_dir:
description: Directory where the ERB Lint command should be run
required: false
erblint_extensions:
description: Extensions of files to check with ERB Lint
required: false
default: "erb"
erblint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
erblint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# Rust
clippy:
description: Enable or disable clippy
required: false
default: "false"
clippy_args:
description: Additional arguments to pass to the linter
required: false
default: ""
clippy_dir:
description: Directory where the clippy command should be run
required: false
clippy_extensions:
description: Extensions of files to check with clippy
required: false
default: "rs"
clippy_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
clippy_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
rustfmt:
description: Enable or disable rustfmt
required: false
default: "false"
rustfmt_args:
description: Additional arguments to pass to the linter
required: false
default: "-- --color=never"
rustfmt_extensions:
description: Extensions of files to check with rustfmt
required: false
default: "rs"
rustfmt_dir:
description: Directory where the rustfmt command should be run
required: false
rustfmt_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
# Swift
# Alias of `swift_format_lockwood` (for backward compatibility)
# TODO: Remove alias in v2
swiftformat:
description: Enable or disable SwiftFormat checks
required: false
default: "false"
swiftformat_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swiftformat_dir:
description: Directory where the SwiftFormat command should be run
required: false
swiftformat_extensions:
description: Extensions of files to check with SwiftFormat
required: false
default: "swift"
swiftformat_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swiftformat_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
swift_format_lockwood:
description: Enable or disable SwiftFormat checks
required: false
default: "false"
swift_format_lockwood_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swift_format_lockwood_dir:
description: Directory where the SwiftFormat command should be run
required: false
swift_format_lockwood_extensions:
description: Extensions of files to check with SwiftFormat
required: false
default: "swift"
swift_format_lockwood_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swift_format_lockwood_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
swift_format_official:
description: Enable or disable swift-format checks
required: false
default: "false"
swift_format_official_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swift_format_official_dir:
description: Directory where the swift-format command should be run
required: false
swift_format_official_extensions:
description: Extensions of files to check with swift-format
required: false
default: "swift"
swift_format_official_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swift_format_official_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
swiftlint:
description: Enable or disable SwiftLint checks
required: false
default: "false"
swiftlint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swiftlint_dir:
description: Directory where the SwiftLint command should be run
required: false
swiftlint_extensions:
description: Extensions of files to check with SwiftLint
required: false
default: "swift"
swiftlint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swiftlint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
dotnet_format:
description: Enable or disable dotnet-format checks
required: false
default: "false"
dotnet_format_args:
description: Additional arguments to pass to the linter
required: false
default: ""
dotnet_format_dir:
description: Directory where the dotnet-format command should be run
required: false
dotnet_format_extensions:
description: Extensions of files to check with dotnet-format
required: false
default: "cs"
dotnet_format_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
dotnet_format_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
runs:
using: node20
main: ./dist/index.js
branding:
icon: check
color: green
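A minimal usage sketch enabling two of the linters listed above (the Node setup, dependency installation, and `@v2` ref are illustrative assumptions; the workflow itself must install the linters, since the action only invokes them):

```yaml
name: Lint
on: push
permissions:
  contents: write     # only needed when auto_fix is allowed to commit changes
  checks: write       # lets the action publish its check runs
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                  # assumes ESLint and Prettier are devDependencies
      - uses: samuelmeuli/lint-action@v2
        with:
          eslint: true
          eslint_extensions: js,ts   # override the default of `js`
          prettier: true
          auto_fix: true             # enabled linters may commit their fixes
```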
Action ID: marketplace/actions/labeler
Author: GitHub
Publisher: actions
Repository: github.com/actions/labeler
Automatically label new pull requests based on the paths of files being changed
| Name | Required | Description |
|---|---|---|
repo-token |
Optional | The GitHub token used to manage labels Default: ${{ github.token }} |
configuration-path |
Optional | The path for the label configurations Default: .github/labeler.yml |
sync-labels |
Optional | Whether or not to remove labels when matching files are reverted |
dot |
Optional | Whether or not to auto-include paths starting with dot (e.g. `.github`) Default: True |
pr-number |
Optional | The pull request number(s) |
| Name | Description |
|---|---|
new-labels |
A comma-separated list of all new labels |
all-labels |
A comma-separated list of all labels that the PR contains |
name: 'Labeler'
description: 'Automatically label new pull requests based on the paths of files being changed'
author: 'GitHub'
inputs:
repo-token:
description: 'The GitHub token used to manage labels'
required: false
default: ${{ github.token }}
configuration-path:
description: 'The path for the label configurations'
default: '.github/labeler.yml'
required: false
sync-labels:
description: 'Whether or not to remove labels when matching files are reverted'
default: false
required: false
dot:
description: 'Whether or not to auto-include paths starting with dot (e.g. `.github`)'
default: true
required: false
pr-number:
description: 'The pull request number(s)'
required: false
outputs:
new-labels:
description: 'A comma-separated list of all new labels'
all-labels:
description: 'A comma-separated list of all labels that the PR contains'
runs:
using: 'node24'
main: 'dist/index.js'
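A minimal usage sketch (the trigger and `@v5` ref are illustrative assumptions):

```yaml
name: Labeler
on: pull_request_target      # allows labeling PRs opened from forks
permissions:
  contents: read
  pull-requests: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/labeler@v5
        with:
          configuration-path: .github/labeler.yml   # default, shown for clarity
          sync-labels: true                          # drop labels when matching files are reverted
```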
Action ID: marketplace/azure/Sample-Chat-Completion-OpenAI-Infra
Author: Unknown
Publisher: azure
Repository: github.com/azure/Sample-Chat-Completion-OpenAI-Infra
This action helps deploy Infrastructure to Azure that helps achieve a chat-completion experience powered by Open AI
| Name | Required | Description |
|---|---|---|
location |
Required | Location to deploy resources to |
env-name |
Required | Name of environment where env values are set |
openai-location |
Optional | Location to deploy OpenAI resources to |
documentintelligence-location |
Optional | Location to deploy Document Intelligence resources to |
bring-your-own-data |
Optional | Boolean to indicate if you are bringing your own data |
data-path |
Optional | Path to data to be used to train the OpenAI model |
additional-args |
Optional | Additional arguments to be passed to the deployment script |
name: 'Azure Infrastructure for Open AI powered Chat'
description: 'This action helps deploy Infrastructure to Azure that helps achieve a chat-completion experience powered by Open AI'
branding:
icon: 'play-circle'
color: 'blue'
inputs:
location:
description: 'Location to deploy resources to'
required: true
env-name:
description: 'Name of environment where env values are set'
required: true
openai-location:
description: 'Location to deploy OpenAI resources to'
required: false
documentintelligence-location:
description: 'Location to deploy Document Intelligence resources to'
required: false
bring-your-own-data:
description: 'Boolean to indicate if you are bringing your own data'
required: false
default: false
data-path:
description: 'Path to data to be used to train the OpenAI model'
required: false
default: ""
additional-args:
description: 'Additional arguments to be passed to the deployment script'
required: false
runs:
using: 'composite'
steps:
- name: 'Checkout master'
uses: actions/checkout@v3
- name: 'az cli login'
uses: azure/login@v2
with:
client-id: ${{ env.AZURE_CLIENT_ID }}
tenant-id: ${{ env.AZURE_TENANT_ID }}
subscription-id: ${{ env.AZURE_SUBSCRIPTION_ID }}
enable-AzPSSession: true
- name: 'Install azd'
uses: Azure/setup-azd@v0.1.0
- name: 'Install Nodejs'
uses: actions/setup-node@v4
with:
node-version: 20
- name: Az powershell
uses: azure/powershell@v2
env:
BRING_YOUR_OWN_DATA: ${{ inputs.bring-your-own-data }}
INPUT_ARGS: ${{ inputs.additional-args }}
with:
inlineScript: |
# Create and initialize AZD project
dir
mkdir orig-ai-repo
cd orig-ai-repo
azd init -t Azure-Samples/azure-search-openai-demo -e ${{ inputs.env-name }}
# modify main.parameters.json in template to take in open ai location as env var
$filePath = "infra/main.parameters.json"
$json = Get-Content -Raw -Path $filePath | ConvertFrom-Json
$openAiEnvVar = @{
value = '${AZURE_OPENAI_LOCATION}'
}
if ($json.parameters) {
$json.parameters | Add-Member -Name "openAiResourceGroupLocation" -Value $openAiEnvVar -MemberType NoteProperty
}
$json | ConvertTo-Json -Depth 10 | Set-Content -Path $filePath
# Get permissions
azd auth login --client-id ${{ env.AZURE_CLIENT_ID }} --federated-credential-provider "github" --tenant-id ${{ env.AZURE_TENANT_ID }}
Get-AzAccessToken -ResourceUrl "https://vault.azure.net"
# Set required environment variables
azd env set AZURE_SUBSCRIPTION_ID ${{ env.AZURE_SUBSCRIPTION_ID }}
azd env set AZURE_PRINCIPAL_ID ${{ env.AZURE_PRINCIPAL_ID}}
# azd env set AZURE_RESOURCE_GROUP ${{ env.AZURE_RG }}
# Set environment variables to answer prompts
azd env set AZURE_LOCATION ${{ inputs.location }}
# Set open ai location if provided, else default to rg location
if ([string]::IsNullOrEmpty("${{ inputs.openai-location }}")) {
azd env set AZURE_OPENAI_LOCATION ${{ inputs.location }}
} else {
azd env set AZURE_OPENAI_LOCATION ${{ inputs.openai-location }}
}
# Set document intelligence location if provided, else default to rg location
if ([string]::IsNullOrEmpty("${{ inputs.documentintelligence-location }}")) {
azd env set AZURE_DOCUMENTINTELLIGENCE_LOCATION ${{ inputs.location }}
} else {
azd env set AZURE_DOCUMENTINTELLIGENCE_LOCATION ${{ inputs.documentintelligence-location }}
}
# Set additional environment variables
if (-not [string]::IsNullOrEmpty($env:INPUT_ARGS)) {
$jsonObject = ConvertFrom-Json -InputObject $env:INPUT_ARGS -AsHashtable
foreach ($key in $jsonObject.Keys) {
$value = $jsonObject[$key]
azd env set $key $value
}
}
azd env get-values
# Bring your own data
$bringYourOwnData = [System.Convert]::ToBoolean($env:BRING_YOUR_OWN_DATA)
if ($bringYourOwnData -eq $true) {
Write-Output "The bring-your-own-data input is true."
if (Test-Path data) {
Remove-Item data -Recurse -Force
}
if (Test-Path ../${{ inputs.data-path }}) {
Copy-Item ../${{ inputs.data-path }} data -Recurse
}
}
# Start the deployment
azd up --no-prompt -e ${{ inputs.env-name }}
azPSVersion: "latest"
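A minimal calling workflow for this composite action (the `@main` ref, secret names, and location values are illustrative assumptions; the environment variables match what the inline script above reads):

```yaml
name: Deploy OpenAI chat infrastructure
on: workflow_dispatch
permissions:
  id-token: write      # required by the OIDC-based azure/login step inside the composite action
  contents: read
env:
  AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}             # assumed secret names
  AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
  AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
  AZURE_PRINCIPAL_ID: ${{ secrets.AZURE_PRINCIPAL_ID }}
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/Sample-Chat-Completion-OpenAI-Infra@main   # illustrative ref
        with:
          location: eastus2                # required
          env-name: chat-demo              # required
          openai-location: eastus          # optional; falls back to `location`
```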
Action ID: marketplace/axel-op/package-java-agnostic-serverless-function
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/package-java-agnostic-serverless-function
Package an Agnostic Serverless Function in Java to deploy it on a specific FaaS provider
| Name | Required | Description |
|---|---|---|
faas-provider |
Required | The provider to package the function for. Possible values are: aws, azure, gcloud. |
function-name |
Required | The name of the function to be packaged |
working-directory |
Optional | The root directory of the function Default: . |
| Name | Description |
|---|---|
deployment-directory |
The directory containing the files to be deployed |
deployment-file |
The zip (for gcloud and azure) or jar (for aws) file to be deployed |
name: "Package Java Agnostic Serverless Functions"
description: "Package an Agnostic Serverless Function in Java to deploy it on a specific FaaS provider"
branding:
icon: package
color: red
inputs:
faas-provider:
description: "The provider to package the function for. Possible values are: aws, azure, gcloud."
required: true
function-name:
description: "The name of the function to be packaged"
required: true
working-directory:
description: "The root directory of the function"
required: false
default: "."
outputs:
deployment-directory:
description: "The directory containing the files to be deployed"
value: ${{ steps.package-aws.outputs.deployment-directory || steps.package-gcloud.outputs.deployment-directory || steps.package-azure.outputs.deployment-directory }}
deployment-file:
description: "The zip (for gcloud and azure) or jar (for aws) file to be deployed"
value: ${{ steps.package-aws.outputs.deployment-file || steps.package-gcloud.outputs.deployment-file || steps.package-azure.outputs.deployment-file }}
runs:
using: "composite"
steps:
- name: Package AWS Lambda
if: ${{ inputs.faas-provider == 'aws' }}
id: package-aws
uses: axel-op/package-java-aws-lambda@main
with:
working-directory: ${{ inputs.working-directory }}
env:
DEPLOYMENT_DIR: deployment
- name: Package Google Cloud Function
if: ${{ inputs.faas-provider == 'gcloud' }}
id: package-gcloud
uses: axel-op/package-java-google-cloud-function@main
with:
working-directory: ${{ inputs.working-directory }}
env:
DEPLOYMENT_DIR: deployment
- name: Package Azure Function app
if: ${{ inputs.faas-provider == 'azure' }}
id: package-azure
uses: axel-op/package-java-azure-function@main
with:
working-directory: ${{ inputs.working-directory }}
function-name: ${{ inputs.function-name }}
env:
DEPLOYMENT_DIR: deployment
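A minimal usage sketch (the `@main` ref, Java version, and function name are illustrative assumptions):

```yaml
name: Package serverless function
on: push
jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4        # assumed toolchain for building the Java function
        with:
          distribution: temurin
          java-version: '17'
      - id: package
        uses: axel-op/package-java-agnostic-serverless-function@main   # illustrative ref
        with:
          faas-provider: aws               # aws, azure or gcloud
          function-name: my-function       # hypothetical function name
      - name: Show packaged artifact
        run: echo "Deployable file ${{ steps.package.outputs.deployment-file }}"
```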
Action ID: marketplace/actions/first-interaction
Author: GitHub
Publisher: actions
Repository: github.com/actions/first-interaction
Greet first-time contributors when they open an issue or PR
| Name | Required | Description |
|---|---|---|
issue_message |
Optional | Comment to post on an individual's first issue |
pr_message |
Optional | Comment to post on an individual's first pull request |
repo_token |
Required | Token with permissions to post issue and PR comments Default: ${{ github.token }} |
name: First Interaction
description: Greet first-time contributors when they open an issue or PR
author: GitHub
inputs:
issue_message:
description: Comment to post on an individual's first issue
pr_message:
description: Comment to post on an individual's first pull request
repo_token:
description: Token with permissions to post issue and PR comments
required: true
default: ${{ github.token }}
runs:
using: node24
main: dist/index.js
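A minimal usage sketch (the `@v1` ref and messages are illustrative assumptions):

```yaml
name: Greet first-time contributors
on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]
permissions:
  issues: write
  pull-requests: write
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1
        with:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          issue_message: 'Thanks for opening your first issue!'
          pr_message: 'Thanks for opening your first pull request!'
```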
Action ID: marketplace/DoozyX/clang-format-lint-action
Author: Slobodan Kletnikov (DoozyX)
Publisher: DoozyX
Repository: github.com/DoozyX/clang-format-lint-action
GitHub Action that checks whether code is formatted correctly using clang-format
| Name | Required | Description |
|---|---|---|
source |
Optional | Source folder to check formatting Default: . |
exclude |
Optional | Folder to exclude from formatting check Default: none |
extensions |
Optional | List of extensions to check Default: c,h,C,H,cpp,hpp,cc,hh,c++,h++,cxx,hxx |
clangFormatVersion |
Optional | Version of clang-format Default: 18 |
style |
Optional | Formatting style to use Default: file |
inplace |
Optional | Just fix files (`clang-format -i`) instead of returning a diff |
name: 'clang-format lint'
author: 'Slobodan Kletnikov (DoozyX)'
description: 'GitHub Action that checks whether code is formatted correctly using clang-format'
branding:
icon: 'align-left'
color: 'green'
inputs:
source:
description: 'Source folder to check formatting'
required: false
default: '.'
exclude:
description: 'Folder to exclude from formatting check'
required: false
default: 'none'
extensions:
description: 'List of extensions to check'
required: false
default: 'c,h,C,H,cpp,hpp,cc,hh,c++,h++,cxx,hxx'
clangFormatVersion:
description: 'Version of clang-format'
required: false
default: '18'
style:
description: 'Formatting style to use'
required: false
default: 'file'
inplace:
description: 'Just fix files (`clang-format -i`) instead of returning a diff'
required: false
default: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- --clang-format-executable
- /clang-format/clang-format${{ inputs.clangFormatVersion }}
- -r
- --color
- always
- --style
- ${{ inputs.style }}
- --inplace
- ${{ inputs.inplace }}
- --extensions
- ${{ inputs.extensions }}
- --exclude
- ${{ inputs.exclude }}
- ${{ inputs.source }}
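A minimal usage sketch (the source folder, extensions, and ref are illustrative assumptions; pin the action to a released tag in real use):

```yaml
name: clang-format check
on: pull_request
jobs:
  format-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DoozyX/clang-format-lint-action@master   # illustrative ref
        with:
          source: ./src              # assumed source folder
          extensions: h,cpp
          clangFormatVersion: 18
          style: file                # use the repository's .clang-format
```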
Action ID: marketplace/mislav/bump-homebrew-formula-action
Author: @mislav
Publisher: mislav
Repository: github.com/mislav/bump-homebrew-formula-action
Bump Homebrew formula after a new release
| Name | Required | Description |
|---|---|---|
formula-name |
Optional | The name of the Homebrew formula (defaults to lower-cased repository name) |
formula-path |
Optional | The path to the Homebrew formula file (defaults to `Formula/<formula-name>.rb`) |
tag-name |
Optional | The git tag name to bump the formula to (defaults to the currently pushed tag) |
download-url |
Optional | The package download URL for the Homebrew formula (defaults to the release tarball) |
download-sha256 |
Optional | The SHA256 checksum of the archive at download-url (defaults to calculating it) |
homebrew-tap |
Optional | The repository where the formula should be updated Default: Homebrew/homebrew-core |
push-to |
Optional | An existing fork of the homebrew-tap repository where the edit should be pushed to (defaults to creating or reusing a personal fork) |
base-branch |
Optional | The branch name in the homebrew-tap repository to update the formula in |
create-pullrequest |
Optional | Set to a boolean value to either force or prohibit making a pull request to homebrew-tap |
create-branch |
Optional | Set to a boolean value to either force or prohibit creating a separate branch on homebrew-tap |
commit-message |
Optional | The git commit message template to use when updating the formula Default: `{{formulaName}} {{version}}` followed by `Created by https://github.com/mislav/bump-homebrew-formula-action` |
name: bump-homebrew-formula
description: "Bump Homebrew formula after a new release"
author: "@mislav"
runs:
using: node20
main: "./lib/index.js"
inputs:
formula-name:
description: The name of the Homebrew formula (defaults to lower-cased repository name)
formula-path:
description: The path to the Homebrew formula file (defaults to `Formula/<formula-name>.rb`)
tag-name:
description: The git tag name to bump the formula to (defaults to the currently pushed tag)
download-url:
description: The package download URL for the Homebrew formula (defaults to the release tarball)
download-sha256:
description: The SHA256 checksum of the archive at download-url (defaults to calculating it)
homebrew-tap:
description: The repository where the formula should be updated
default: Homebrew/homebrew-core
push-to:
description: An existing fork of the homebrew-tap repository where the edit should be pushed to (defaults to creating or reusing a personal fork)
base-branch:
description: The branch name in the homebrew-tap repository to update the formula in
create-pullrequest:
description: Set to a boolean value to either force or prohibit making a pull request to homebrew-tap
create-branch:
description: Set to a boolean value to either force or prohibit creating a separate branch on homebrew-tap
commit-message:
description: The git commit message template to use when updating the formula
default: |
{{formulaName}} {{version}}
Created by https://github.com/mislav/bump-homebrew-formula-action
branding:
icon: box
color: orange
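A minimal usage sketch for this action, triggered on a version tag push. The `@v3` ref, the example tap name, and the `COMMITTER_TOKEN` secret/env name are illustrative assumptions, not taken from the metadata above; check the action's README for the exact credential it expects.
name: bump-homebrew-formula
on:
  push:
    tags: ['v*']
jobs:
  homebrew:
    runs-on: ubuntu-latest
    steps:
      - uses: mislav/bump-homebrew-formula-action@v3   # assumed ref; pin to a real release
        with:
          formula-name: my-cli                         # documented input; defaults to the lower-cased repo name
          homebrew-tap: my-org/homebrew-tap            # documented input; defaults to Homebrew/homebrew-core
        env:
          COMMITTER_TOKEN: ${{ secrets.COMMITTER_TOKEN }}   # assumed env var name for push credentials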
Action ID: marketplace/azure/setup-azd
Author: Azure Developer CLI Team
Publisher: azure
Repository: github.com/azure/setup-azd
This action downloads and installs azd
| Name | Required | Description |
|---|---|---|
version |
Optional | The version of azd to install (default: latest) Default: latest |
name: 'setup-azd'
description: 'This action downloads and installs azd'
author: 'Azure Developer CLI Team'
inputs:
version:
required: false
description: 'The version of azd to install (default: latest)'
default: 'latest'
runs:
using: 'node20'
main: 'dist/index.js'
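A minimal usage sketch; the `@v2` ref is an assumed release tag, everything else comes from the documented inputs.
name: azd
on: push
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: Azure/setup-azd@v2    # assumed ref; pin to a real release
        with:
          version: latest           # documented input; also the default
      - run: azd version            # verify the install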
Action ID: marketplace/posener/goaction
Author: Unknown
Publisher: posener
Repository: github.com/posener/goaction
Creates action files for Go code
| Name | Required | Description |
|---|---|---|
path |
Required | Path to the main Go package. Default: . |
name |
Optional | Override action name, the default name is the package name. |
desc |
Optional | Override action description, the default description is the package synopsis. |
image |
Optional | Override Docker image to run the action with (See https://hub.docker.com/_/golang?tab=tags). Default: golang:1.14.2-alpine3.11 |
install |
Optional | Comma separated list of requirements to 'apk add'. |
icon |
Optional | Set branding icon. (See options at https://feathericons.com). |
color |
Optional | Set branding color. (white, yellow, blue, green, orange, red, purple or gray-dark). |
email |
Optional | Email for commit message. Default: posener@gmail.com |
GITHUB_TOKEN |
Optional | Github token for PR comments. Optional. |
# File generated by github.com/posener/goaction. DO NOT EDIT.
name: posener/goaction
description: "Creates action files for Go code"
inputs:
path:
default: .
description: "Path to main Go main package."
required: true
name:
description: "Override action name, the default name is the package name."
required: false
desc:
description: "Override action description, the default description is the package synopsis."
required: false
image:
default: golang:1.14.2-alpine3.11
description: "Override Docker image to run the action with (See https://hub.docker.com/_/golang?tab=tags)."
required: false
install:
description: "Comma separated list of requirements to 'apk add'."
required: false
icon:
description: "Set branding icon. (See options at https://feathericons.com)."
required: false
color:
description: "Set branding color. (white, yellow, blue, green, orange, red, purple or gray-dark)."
required: false
email:
default: posener@gmail.com
description: "Email for commit message."
required: false
GITHUB_TOKEN:
description: "Github token for PR comments. Optional."
required: false
runs:
using: docker
image: Dockerfile
env:
email: "${{ inputs.email }}"
GITHUB_TOKEN: "${{ inputs.GITHUB_TOKEN }}"
args:
- "-path=${{ inputs.path }}"
- "-name=${{ inputs.name }}"
- "-desc=${{ inputs.desc }}"
- "-image=${{ inputs.image }}"
- "-install=${{ inputs.install }}"
- "-icon=${{ inputs.icon }}"
- "-color=${{ inputs.color }}"
branding:
icon: activity
color: blue
Action ID: marketplace/github/ghas-jira-integration
Author: Unknown
Publisher: github
Repository: github.com/github/ghas-jira-integration
This helps sync GHAS alerts to JIRA by creating an issue for each alert.
| Name | Required | Description |
|---|---|---|
jira_url |
Required | URL of the JIRA instance |
jira_user |
Required | JIRA account with the required permissions |
jira_token |
Required | JIRA password or token |
jira_project |
Required | JIRA project key |
jira_labels |
Optional | JIRA bug label(s). (e.g. valid format can be "red-team,blue-team,green-team", or "red-team") This tool will split the values entered by commas. Spaces in the double quotes will be respected and saved. |
github_token |
Optional | GitHub API token with the required permissions Default: ${{ github.token }} |
sync_direction |
Optional | Which direction to synchronize in ("gh2jira", "jira2gh" or "both") Default: both |
issue_end_state |
Optional | Custom end state Default: Done |
issue_reopen_state |
Optional | Custom reopen state Default: To Do |
name: 'Sync GitHub Advanced Security and Jira'
description: "This helps sync GHAS alerts to JIRA by creating an
issue for each alert."
inputs:
jira_url:
description: 'URL of the JIRA instance'
required: true
jira_user:
description: 'JIRA account with the required permissions'
required: true
jira_token:
description: 'JIRA password or token'
required: true
jira_project:
description: 'JIRA project key'
required: true
jira_labels:
description: 'JIRA bug label(s). (e.g. valid format can be "red-team,blue-team,green-team", or "red-team")
This tool will split the values entered by commas. Spaces in the double quotes
will be respected and saved.'
required: false
github_token:
description: 'GitHub API token with the required permissions'
required: false
default: ${{ github.token }}
sync_direction:
description: 'Which direction to synchronize in ("gh2jira", "jira2gh" or "both")'
required: false
default: 'both'
issue_end_state:
description: 'Custom end state'
required: false
default: 'Done'
issue_reopen_state:
description: 'Custom reopen state'
required: false
default: 'To Do'
runs:
using: composite
steps:
- name: Run GitHub to Jira Sync
working-directory: ${{ github.action_path }}
shell: bash
env:
INPUTS_GITHUB_TOKEN: ${{ inputs.github_token }}
INPUTS_JIRA_URL: ${{ inputs.jira_url }}
INPUTS_JIRA_USER: ${{ inputs.jira_user }}
INPUTS_JIRA_TOKEN: ${{ inputs.jira_token }}
INPUTS_JIRA_PROJECT: ${{ inputs.jira_project }}
INPUTS_JIRA_LABELS: ${{ inputs.jira_labels }}
INPUTS_SYNC_DIRECTION: ${{ inputs.sync_direction }}
INPUTS_ISSUE_END_STATE: ${{ inputs.issue_end_state }}
INPUTS_ISSUE_REOPEN_STATE: ${{ inputs.issue_reopen_state }}
run: |
pip3 install pipenv
pipenv install
REPOSITORY_NAME="$(echo "$GITHUB_REPOSITORY" | cut -d/ -f 2)"
# Run pipenv from the temporary directory
pipenv run ./gh2jira sync \
--gh-url "$GITHUB_API_URL" \
--gh-token "$INPUTS_GITHUB_TOKEN" \
--gh-org "$GITHUB_REPOSITORY_OWNER" \
--gh-repo "$REPOSITORY_NAME" \
--jira-url "$INPUTS_JIRA_URL" \
--jira-user "$INPUTS_JIRA_USER" \
--jira-token "$INPUTS_JIRA_TOKEN" \
--jira-project "$INPUTS_JIRA_PROJECT" \
--jira-labels "$INPUTS_JIRA_LABELS" \
--direction "$INPUTS_SYNC_DIRECTION" \
--issue-end-state "$INPUTS_ISSUE_END_STATE" \
--issue-reopen-state "$INPUTS_ISSUE_REOPEN_STATE"
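A scheduled sync sketch for this action. The `@v1` ref, the Jira instance URL, the project key, and the secret names are illustrative assumptions; only the input names come from the metadata above.
name: Sync GHAS alerts to Jira
on:
  schedule:
    - cron: '0 6 * * *'
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: github/ghas-jira-integration@v1        # assumed ref
        with:
          jira_url: https://example.atlassian.net    # placeholder Jira instance
          jira_user: ${{ secrets.JIRA_USER }}        # assumed secret names
          jira_token: ${{ secrets.JIRA_TOKEN }}
          jira_project: SEC                          # placeholder project key
          sync_direction: both                       # documented default, shown explicitly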
Action ID: marketplace/rhysd/actions-hugo
Author: peaceiris
Publisher: rhysd
Repository: github.com/rhysd/actions-hugo
GitHub Actions for Hugo ⚡️ Setup Hugo quickly and build your site fast. Hugo extended and Hugo Modules are supported.
| Name | Required | Description |
|---|---|---|
hugo-version |
Optional | The Hugo version to download (if necessary) and use. Example: 0.58.2 Default: latest |
extended |
Optional | Download (if necessary) and use Hugo extended version. Example: true Default: false |
name: 'Hugo setup'
description: 'GitHub Actions for Hugo ⚡️ Setup Hugo quickly and build your site fast. Hugo extended and Hugo Modules are supported.'
author: 'peaceiris'
inputs:
hugo-version:
description: 'The Hugo version to download (if necessary) and use. Example: 0.58.2'
required: false
default: 'latest'
extended:
description: 'Download (if necessary) and use Hugo extended version. Example: true'
required: false
default: 'false'
runs:
using: 'node20'
main: 'lib/index.js'
branding:
icon: 'package'
color: 'yellow'
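A minimal site-build sketch using this fork; the `@v2` ref and the checkout/build steps around it are assumptions.
name: Build Hugo site
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rhysd/actions-hugo@v2     # assumed ref for this fork
        with:
          hugo-version: latest          # documented input; also the default
          extended: 'true'              # documented input
      - run: hugo --minify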
Action ID: marketplace/srt32/yaaas
Author: Unknown
Publisher: srt32
Repository: github.com/srt32/yaaas
Greet someone and record the time
| Name | Required | Description |
|---|---|---|
who-to-greet |
Required | Who to greet Default: World |
| Name | Description |
|---|---|
time |
The time we greeted you |
# action.yml
name: 'Hello Worldoworld!'
description: 'Greet someone and record the time'
inputs:
who-to-greet: # id of input
description: 'Who to greet'
required: true
default: 'World'
outputs:
time: # id of output
description: 'The time we greeted you'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.who-to-greet }}
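A minimal usage sketch that also consumes the documented `time` output; the `@main` ref is an assumption.
name: Greet
on: push
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - id: hello
        uses: srt32/yaaas@main          # assumed ref
        with:
          who-to-greet: Octocat
      - run: echo "Greeted at ${{ steps.hello.outputs.time }}"   # documented output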
Action ID: marketplace/actions-hub/stylelint
Author: Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>
Publisher: actions-hub
Repository: github.com/actions-hub/stylelint
GitHub Action that runs stylelint.
name: 'stylelinter'
description: 'GitHub Action that runs stylelint.'
author: 'Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>'
branding:
icon: 'layout'
color: 'black'
runs:
using: 'docker'
image: 'Dockerfile'
Action ID: marketplace/julia-actions/install-juliaup
Author: julia-actions organization, contributors
Publisher: julia-actions
Repository: github.com/julia-actions/install-juliaup
Install Juliaup, add it to the PATH, and use it to install Julia
| Name | Required | Description |
|---|---|---|
channel |
Required | The Juliaup channel to install (and set as the default channel). |
internal-juliaup-version |
Optional | [private internal]. The Juliaup version to install. This is not part of the public API of this action. Default: 1.18.2 |
name: 'install-juliaup'
description: 'Install Juliaup, add it to the PATH, and use it to install Julia'
author: 'julia-actions organization, contributors'
inputs:
channel:
description: 'The Juliaup channel to install (and set as the default channel).'
required: true
internal-juliaup-version:
description: '[private internal]. The Juliaup version to install. This is not part of the public API of this action.'
required: false
default: '1.18.2' # Update this value whenever a new release of Juliaup is made.
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'download'
color: 'green'
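A minimal usage sketch; the `@v2` ref is an assumed release tag.
name: Julia
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: julia-actions/install-juliaup@v2   # assumed ref
        with:
          channel: '1'                           # required: Juliaup channel to install and set as default
      - run: julia --version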
Action ID: marketplace/wlixcc/pod-lib-update-action
Author: Unknown
Publisher: wlixcc
Repository: github.com/wlixcc/pod-lib-update-action
Use this action to automatically generate a new version of a pod library
| Name | Required | Description |
|---|---|---|
spec_repo_url |
Required | Url of your spec repo |
spec_file_path |
Required | Relative path to the YourSpecFileName.podspec file in your lib repo |
lint_args |
Optional | Spec lint args, such as --allow-warnings |
push_args |
Optional | Repo push args, such as --allow-warnings |
name: 'pod lib update'
description: 'Use this action to automatically generate a new version of a pod library'
#https://help.github.com/en/actions/building-actions/metadata-syntax-for-github-actions
inputs:
spec_repo_url:
description: 'Url of your spec repo'
required: true
spec_file_path:
description: 'Relative path to the YourSpecFileName.podspec file in your lib repo'
required: true
lint_args:
description: 'Spec lint args, such as --allow-warnings'
required: false
push_args:
description: 'Repo push args, such as --allow-warnings'
required: false
runs:
using: 'node12'
main: 'index.js'
branding:
icon: 'arrow-up'
color: 'purple'
Action ID: marketplace/JamesIves/github-sponsors-readme-action
Author: James Ives <iam@jamesiv.es>
Publisher: JamesIves
Repository: github.com/JamesIves/github-sponsors-readme-action
This GitHub Action will automatically add your GitHub Sponsors to your README.
| Name | Required | Description |
|---|---|---|
token |
Required | You must provide the action with a Personal Access Token (PAT) with either the user:read or org:read permission scope and store it in the secrets / with menu as a secret. This should be generated from the account or organization that receives sponsorship. [Learn more about creating and using encrypted secrets here.](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets) |
file |
Required | This should point to the file that you're generating, for example README.md or path/to/CREDITS.md |
organization |
Optional | If you're displaying sponsorship information as an organization you should toggle this option to true. You also need to provide the action with an org:read scoped PAT. |
minimum |
Optional | Using this input you can set the minimum sponsorship threshold. For example, setting this to 500 will only display sponsors who give $5 USD or more. By default the action will display all of your sponsors. |
maximum |
Optional | Using this input you can set the maximum sponsorship threshold. For example, setting this to 1000 will only display sponsors who give $10 USD or less. By default the action will display all of your sponsors. |
marker |
Optional | This allows you to modify the marker comment that is placed in your file. By default this is set to sponsors - <!-- sponsors --> <!-- sponsors -->, if you set this to gold for example you can place <!-- gold --> <!-- gold --> in your file. |
template |
Optional | Allows you to modify the default template. Please refer to the `template` section of this README for more information. |
fallback |
Optional | Allows you to specify a fallback if you have no sponsors. By default nothing is displayed. |
active-only |
Optional | If set to false, inactive sponsors will be displayed. This can be useful if you want to display all sponsors, regardless of their status. Default: true |
include-private |
Optional | If set to true, private sponsors will be displayed in the list, however any identifying information will be redacted. This can be useful if you want to display all sponsors, regardless of their privacy settings. Default: false |
| Name | Description |
|---|---|
sponsorshipStatus |
The status of the action that indicates if the run failed or passed. Possible outputs include: success|failed |
name: 'Add GitHub Sponsors to Readme'
description: 'This GitHub Action will automatically add your GitHub Sponsors to your README.'
author: 'James Ives <iam@jamesiv.es>'
runs:
using: 'node20'
main: 'lib/main.js'
branding:
icon: 'heart'
color: 'red'
inputs:
token:
description: >
You must provide the action with a Personal Access Token (PAT) with either the user:read or org:read permission scope and store it in the secrets / with menu as a secret.
This should be generated from the account or organization that receives sponsorship.
[Learn more about creating and using encrypted secrets here.](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets)
required: true
file:
description: This should point to the file that you're generating, for example README.md or path/to/CREDITS.md
required: true
organization:
description: If you're displaying sponsorship information as an organization you should toggle this option to true. You also need to provide the action with an org:read scoped PAT.
required: false
minimum:
description: Using this input you can set the minimum sponsorship threshold. For example, setting this to 500 will only display sponsors who give $5 USD or more. By default the action will display all of your sponsors.
required: false
maximum:
description: Using this input you can set the maximum sponsorship threshold. For example, setting this to 1000 will only display sponsors who give $10 USD or less. By default the action will display all of your sponsors.
required: false
marker:
description: This allows you to modify the marker comment that is placed in your file. By default this is set to sponsors - <!-- sponsors --> <!-- sponsors -->, if you set this to gold for example you can place <!-- gold --> <!-- gold --> in your file.
required: false
template:
description: Allows you to modify the default template. Please refer to the `template` section of this README for more information.
required: false
fallback:
description: Allows you to specify a fallback if you have no sponsors. By default nothing is displayed.
required: false
active-only:
description: If set to false, inactive sponsors will be displayed. This can be useful if you want to display all sponsors, regardless of their status.
default: 'true'
required: false
include-private:
description: If set to true, private sponsors will be displayed in the list, however any identifying information will be redacted. This can be useful if you want to display all sponsors, regardless of their privacy settings.
default: 'false'
required: false
outputs:
sponsorshipStatus:
description: 'The status of the action that indicates if the run failed or passed. Possible outputs include: success|failed'
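A scheduled update sketch for this action. The `@v1` ref and the `SPONSORS_PAT` secret name are assumptions; the file still has to be committed back by a follow-up step, which is not shown here.
name: Update sponsors in README
on:
  schedule:
    - cron: '0 0 * * 0'
jobs:
  sponsors:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: JamesIves/github-sponsors-readme-action@v1   # assumed ref
        with:
          token: ${{ secrets.SPONSORS_PAT }}               # assumed secret name; PAT with user:read or org:read scope
          file: README.md                                  # documented input: the file to generate into
      # pair with a commit/push step to persist the updated file (assumption, not shown)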
Action ID: marketplace/yeslayla/zappa-deploy-action
Author: josephbmanley
Publisher: yeslayla
Repository: github.com/yeslayla/zappa-deploy-action
Deploys a Zappa application to AWS
| Name | Required | Description |
|---|---|---|
directory |
Optional | Default: . |
environment |
Required | Name of the preset in `zappa_settings` to use |
name: "Zappa Deploy"
description: "Deploys a Zappa application to AWS"
author: josephbmanley
inputs:
directory:
description: ""
default: "."
environment:
description: 'Name of the preset in `zappa_settings` to use'
required: true
runs:
using: docker
image: Dockerfile
args:
- ${{ inputs.directory }}
- ${{ inputs.environment }}
branding:
icon: package
color: blue
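A deployment sketch for this action. The `@v1` ref and the AWS credential env vars are assumptions, since the metadata above documents only the `directory` and `environment` inputs.
name: Deploy with Zappa
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: yeslayla/zappa-deploy-action@v1       # assumed ref
        with:
          directory: .                              # documented input; also the default
          environment: production                   # required: preset name from zappa_settings
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}         # assumed: Zappa needs AWS credentials
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}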
Action ID: marketplace/actions/add-to-project
Author: GitHub
Publisher: actions
Repository: github.com/actions/add-to-project
Automatically add issues and PRs to GitHub projects
| Name | Required | Description |
|---|---|---|
project-url |
Required | URL of the project to add issues to |
github-token |
Required | A GitHub personal access token with write access to the project |
labeled |
Optional | A comma-separated list of labels to use as a filter for issue to be added |
label-operator |
Optional | The behavior of the labels filter, AND to match all labels, OR to match any label, NOT to exclude any listed label (default is OR) |
| Name | Description |
|---|---|
itemId |
The ID of the item that was added to the project |
name: Add To GitHub projects
description: Automatically add issues and PRs to GitHub projects
author: GitHub
branding:
icon: table
color: white
inputs:
project-url:
required: true
description: URL of the project to add issues to
github-token:
required: true
description: A GitHub personal access token with write access to the project
labeled:
required: false
description: A comma-separated list of labels to use as a filter for issue to be added
label-operator:
required: false
description: The behavior of the labels filter, AND to match all labels, OR to match any label, NOT to exclude any listed label (default is OR)
outputs:
itemId:
description: The ID of the item that was added to the project
runs:
using: 'node20'
main: 'dist/index.js'
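A minimal usage sketch; the `@v1` ref, the project URL, and the secret name are illustrative assumptions.
name: Add issues to project
on:
  issues:
    types: [opened]
jobs:
  add:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v1                             # assumed ref
        with:
          project-url: https://github.com/orgs/my-org/projects/1    # placeholder project URL
          github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}           # assumed secret name; PAT with project write access
          labeled: bug                                              # optional documented filter
          label-operator: OR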
Action ID: marketplace/mheap/auto-close-org-issues-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/auto-close-org-issues-action
Automatically close issues that are raised by people that belong to a specific organization
| Name | Required | Description |
|---|---|---|
token |
Required | A GitHub API token that has permission to list org membership |
org |
Required | The org to auto-close issues from |
message |
Required | The comment to add when auto-closing an issue |
keep_open |
Optional | If any of these labels (comma-separated) are applied, the issue will not be auto-closed |
name: Auto Close Org Issues
description: Automatically close issues that are raised by people that belong to a specific organization
runs:
using: docker
image: Dockerfile
branding:
icon: slash
color: orange
inputs:
token:
description: "A GitHub API token that has permission to list org membership"
required: true
org:
description: "The org to auto-close issues from"
required: true
message:
description: "The comment to add when auto-closing an issue"
required: true
keep_open:
description: "If any of these labels (comma-separated) are applied, the issue will not be auto-closed"
required: false
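A minimal usage sketch; the `@v1` ref, the org name, and the secret name are illustrative assumptions.
name: Auto close org issues
on:
  issues:
    types: [opened]
jobs:
  close:
    runs-on: ubuntu-latest
    steps:
      - uses: mheap/auto-close-org-issues-action@v1    # assumed ref
        with:
          token: ${{ secrets.ORG_READ_TOKEN }}         # assumed secret; needs org membership read access
          org: my-org                                  # placeholder organization
          message: Please open internal issues in our internal tracker instead.
          keep_open: keep-open,triage                  # optional documented input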
Action ID: marketplace/samuelmeuli/action-snapcraft
Author: Samuel Meuli
Publisher: samuelmeuli
Repository: github.com/samuelmeuli/action-snapcraft
GitHub Action for setting up Snapcraft
| Name | Required | Description |
|---|---|---|
channel |
Optional | The channel to install Snapcraft from Default: stable |
skip_install |
Optional | Skip installation (login only) |
use_lxd |
Optional | Whether to install and configure lxd |
name: Snapcraft Action
author: Samuel Meuli
description: GitHub Action for setting up Snapcraft
inputs:
channel:
description: The channel to install Snapcraft from
required: false
default: stable
skip_install:
description: Skip installation (login only)
required: false
use_lxd:
description: Whether to install and configure lxd
required: false
# tag: deprecate_use_lxd
deprecationMessage: automatic on ubuntu-20.04 or later
runs:
using: node20
main: ./index.js
branding:
icon: upload-cloud
color: green
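A minimal setup sketch; the `@v3` ref and the surrounding steps are assumptions.
name: Snapcraft
on: push
jobs:
  snap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: samuelmeuli/action-snapcraft@v3   # assumed ref
        with:
          channel: stable                       # documented input; also the default
      - run: snapcraft --version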
Action ID: marketplace/azure/app-configuration-import-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/app-configuration-import-action
Import application key-values and feature flags into an Azure App Configuration
| Name | Required | Description |
|---|---|---|
name |
Required | App Configuration name. |
path |
Optional | Configuration file path |
sourceName |
Optional | App Configuration source to import from. Note that importing feature flags from App Service is not supported. |
appServiceAccountName |
Optional | App service account name |
prefix |
Optional | Prefix to be prepended to imported keys. |
label |
Optional | Label to be assigned to imported key-values and feature flags. |
name: app-configuration-import
description: Import application key-values and feature flags into an Azure App Configuration
inputs:
name:
description: App Configuration name.
required: true
path:
description: Configuration file path
required: false
sourceName:
description: App Configuration source to import from. Note that importing feature flags from App Service is not supported.
required: false
appServiceAccountName:
description: App service account name
required: false
prefix:
description: Prefix to be prepended to imported keys.
required: false
label:
description: Label to be assigned to imported key-values and feature flags.
required: false
runs:
using: "composite"
steps:
- name: Import to App Configuration
shell: bash
run: |
${{github.action_path}}/scripts/validate-inputs.sh "${{inputs.path}}" "${{inputs.sourceName}}" "${{inputs.appServiceAccountName}}"
if [[ ! -z "${{inputs.path}}" ]]; then
${{github.action_path}}/scripts/import-file.sh "${{inputs.name}}" "${{inputs.path}}" "${{inputs.prefix}}" "${{inputs.label}}"
else
echo "Import from source app configuration or app service is currently not supported."
exit 1
fi
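An import sketch for this action. The `@v1` ref, the preceding `azure/login` step, and the placeholder names are assumptions; the composite step above clearly shells out to Azure tooling, so an authenticated session is assumed to be required.
name: Import App Configuration
on:
  push:
    branches: [main]
jobs:
  import:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2                             # assumed prerequisite for an authenticated Azure CLI session
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: Azure/app-configuration-import-action@v1   # assumed ref
        with:
          name: my-app-config                            # placeholder App Configuration name
          path: ./config/appsettings.json                # placeholder configuration file
          label: production                              # optional documented input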
Action ID: marketplace/rhysd/rust-cache
Author: Arpad Borsos <swatinem@swatinem.de>
Publisher: rhysd
Repository: github.com/rhysd/rust-cache
A GitHub Action that implements smart caching for rust/cargo projects with sensible defaults.
| Name | Required | Description |
|---|---|---|
prefix-key |
Optional | The prefix cache key, this can be changed to start a new cache manually. Default: v0-rust |
shared-key |
Optional | A cache key that is used instead of the automatic `job`-based key, and is stable over multiple jobs. |
key |
Optional | An additional cache key that is added alongside the automatic `job`-based cache key and can be used to further differentiate jobs. |
env-vars |
Optional | Additional environment variables to include in the cache key, separated by spaces. |
workspaces |
Optional | Paths to multiple Cargo workspaces and their target directories, separated by newlines. |
cache-directories |
Optional | Additional non workspace directories to be cached, separated by newlines. |
cache-targets |
Optional | Determines whether workspace targets are cached. If `false`, only the cargo registry will be cached. Default: true |
cache-on-failure |
Optional | Cache even if the build fails. Defaults to false. |
cache-all-crates |
Optional | Determines which crates are cached. If `true` all crates will be cached, otherwise only dependent crates will be cached. Default: false |
save-if |
Optional | Determines whether the cache should be saved. If `false`, the cache is only restored. Default: true |
cache-provider |
Optional | Determines which provider to use for caching. Options are github or buildjet, defaults to github. Default: github |
cache-bin |
Optional | Determines whether to cache ${CARGO_HOME}/bin. Default: true |
lookup-only |
Optional | Check if a cache entry exists without downloading the cache Default: false |
| Name | Description |
|---|---|
cache-hit |
A boolean value that indicates an exact match was found. |
name: "Rust Cache"
description: "A GitHub Action that implements smart caching for rust/cargo projects with sensible defaults."
author: "Arpad Borsos <swatinem@swatinem.de>"
inputs:
prefix-key:
description: "The prefix cache key, this can be changed to start a new cache manually."
required: false
default: "v0-rust"
shared-key:
description: "A cache key that is used instead of the automatic `job`-based key, and is stable over multiple jobs."
required: false
key:
description: "An additional cache key that is added alongside the automatic `job`-based cache key and can be used to further differentiate jobs."
required: false
env-vars:
description: "Additional environment variables to include in the cache key, separated by spaces."
required: false
workspaces:
description: "Paths to multiple Cargo workspaces and their target directories, separated by newlines."
required: false
cache-directories:
description: "Additional non workspace directories to be cached, separated by newlines."
required: false
cache-targets:
description: "Determines whether workspace targets are cached. If `false`, only the cargo registry will be cached."
required: false
default: "true"
cache-on-failure:
description: "Cache even if the build fails. Defaults to false."
required: false
cache-all-crates:
description: "Determines which crates are cached. If `true` all crates will be cached, otherwise only dependent crates will be cached."
required: false
default: "false"
save-if:
description: "Determiners whether the cache should be saved. If `false`, the cache is only restored."
required: false
default: "true"
cache-provider:
description: "Determines which provider to use for caching. Options are github or buildjet, defaults to github."
required: false
default: "github"
cache-bin:
description: "Determines whether to cache ${CARGO_HOME}/bin."
required: false
default: "true"
lookup-only:
description: "Check if a cache entry exists without downloading the cache"
required: false
default: "false"
outputs:
cache-hit:
description: "A boolean value that indicates an exact match was found."
runs:
using: "node20"
main: "dist/restore/index.js"
post: "dist/save/index.js"
post-if: "success() || env.CACHE_ON_FAILURE == 'true'"
branding:
icon: "archive"
color: "gray-dark"
Action ID: marketplace/julia-actions/julia-uploadcodecov
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-uploadcodecov
Upload Julia test coverage results to codecov
name: 'Upload Julia codecov results'
description: 'Upload Julia test coverage results to codecov'
author: 'David Anthoff'
runs:
using: 'node12'
main: 'lib/main.js'
Action ID: marketplace/actions/javascript-action
Author: Your name or organization here
Publisher: actions
Repository: github.com/actions/javascript-action
Provide a description here
| Name | Required | Description |
|---|---|---|
milliseconds |
Required | Your input description here Default: 1000 |
| Name | Description |
|---|---|
time |
Your output description here |
name: The name of your action here
description: Provide a description here
author: Your name or organization here
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: heart
color: red
# Define your inputs here.
inputs:
milliseconds:
description: Your input description here
required: true
default: '1000'
# Define your outputs here.
outputs:
time:
description: Your output description here
runs:
using: node24
main: dist/index.js
Action ID: marketplace/renovatebot/github-action
Author: Jeroen de Bruijn
Publisher: renovatebot
Repository: github.com/renovatebot/github-action
GitHub Action to run self-hosted Renovate.
| Name | Required | Description |
|---|---|---|
configurationFile |
Optional | Configuration file to configure Renovate. Either use this input or the 'RENOVATE_CONFIG_FILE' environment variable. |
token |
Optional | GitHub personal access token that Renovate should use. This should be configured using a Secret. Either use this input or the 'RENOVATE_TOKEN' environment variable. |
env-regex |
Optional | Override the environment variables which will be passed into the renovate container. Defaults to `^(?:RENOVATE_\\w+|LOG_LEVEL|GITHUB_COM_TOKEN|NODE_OPTIONS|(?:HTTPS?|NO)_PROXY|(?:https?|no)_proxy)$` |
renovate-version |
Optional | Renovate version to use.
Default: 42 |
renovate-image |
Optional | Renovate docker image name.
Default: ghcr.io/renovatebot/renovate |
mount-docker-socket |
Optional | Mount the Docker socket inside the renovate container so that the commands can use Docker. Also add the user inside the renovate container to the docker group for socket permissions. |
docker-socket-host-path |
Optional | Allows the overriding of the host path for the Docker socket that is mounted into the container.
Useful on systems where the host Docker socket is located somewhere other than '/var/run/docker.sock' (the default).
Only applicable when 'mount-docker-socket' is true.
Default: /var/run/docker.sock |
docker-cmd-file |
Optional | Override docker command. Default command is `renovate` |
docker-network |
Optional | Docker network. |
docker-user |
Optional | Docker user. Defaults to an unprivileged user |
docker-volumes |
Optional | Docker volume mounts. Defaults to /tmp:/tmp
Default: /tmp:/tmp |
name: 'Renovate Bot GitHub Action'
description: 'GitHub Action to run self-hosted Renovate.'
author: 'Jeroen de Bruijn'
branding:
icon: refresh-cw
color: blue
inputs:
configurationFile:
description: |
Configuration file to configure Renovate. Either use this input or the
'RENOVATE_CONFIG_FILE' environment variable.
required: false
token:
description: |
GitHub personal access token that Renovate should use. This should be
configured using a Secret. Either use this input or the 'RENOVATE_TOKEN'
environment variable.
required: false
env-regex:
description: |
Override the environment variables which will be passed into the renovate container.
Defaults to `^(?:RENOVATE_\\w+|LOG_LEVEL|GITHUB_COM_TOKEN|NODE_OPTIONS|(?:HTTPS?|NO)_PROXY|(?:https?|no)_proxy)$`
required: false
renovate-version:
description: |
Renovate version to use.
required: false
default: '42' # renovate
renovate-image:
description: |
Renovate docker image name.
required: false
default: ghcr.io/renovatebot/renovate
mount-docker-socket:
description: |
Mount the Docker socket inside the renovate container so that the commands
can use Docker. Also add the user inside the renovate container to the
docker group for socket permissions.
required: false
docker-socket-host-path:
description: |
Allows the overriding of the host path for the Docker socket that is mounted into the container.
Useful on systems where the host Docker socket is located somewhere other than '/var/run/docker.sock' (the default).
Only applicable when 'mount-docker-socket' is true.
required: false
default: /var/run/docker.sock
docker-cmd-file:
description: |
Override docker command. Default command is `renovate`
required: false
docker-network:
description: |
Docker network.
required: false
docker-user:
description: |
Docker user. Defaults to an unprivileged user
required: false
docker-volumes:
description: |
Docker volume mounts. Defaults to /tmp:/tmp
default: /tmp:/tmp
required: false
runs:
using: node20
main: dist/index.js
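A scheduled self-hosted Renovate sketch; the `@v40` ref, the config file name, and the secret name are assumptions, with only documented inputs used.
name: Renovate
on:
  schedule:
    - cron: '0 3 * * *'
jobs:
  renovate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                    # the configuration file is read from the workspace
      - uses: renovatebot/github-action@v40          # assumed ref
        with:
          configurationFile: renovate-config.js      # placeholder config file in the repository
          token: ${{ secrets.RENOVATE_TOKEN }}       # assumed secret holding the PAT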
Action ID: marketplace/TryGhost/label-actions
Author: Ghost Foundation
Publisher: TryGhost
Repository: github.com/TryGhost/label-actions
Performs triaging actions when issues are opened, labeled or unlabeled
| Name | Required | Description |
|---|---|---|
github-token |
Optional | GitHub access token Default: ${{ github.token }} |
name: 'Label Actions'
description: 'Performs triaging actions when issues are opened, labeled or unlabeled'
author: 'Ghost Foundation'
inputs:
github-token:
description: 'GitHub access token'
default: '${{ github.token }}'
required: false
runs:
using: 'node16'
main: 'dist/index.js'
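A minimal triage sketch; the `@v4` ref and the event types are assumptions, and the action's behaviour is typically driven by a repository-level config described in its README.
name: Label actions
on:
  issues:
    types: [opened, labeled, unlabeled]
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: TryGhost/label-actions@v4     # assumed ref
        with:
          github-token: ${{ github.token }} # documented default, shown explicitly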
Action ID: marketplace/Readme-Workflows/readme-replacer
Author: Readme-Workflows
Publisher: Readme-Workflows
Repository: github.com/Readme-Workflows/readme-replacer
Auto replace content from template file!
| Name | Required | Description |
|---|---|---|
GH_USERNAME |
Optional | Your GitHub username Default: ${{ github.repository_owner }} |
TEMPLATE_FILE |
Optional | Path to template file Default: ./TEMPLATE.md |
COMMIT_FILE |
Optional | Path to commit file Default: ./README.md |
CUSTOM_REPLACER_FILE |
Optional | Path to custom replacer file (JSON). You can select if you want to use eval or not! Default: ./.github/customReplacer.json |
DATE_FORMAT |
Optional | Format of the date/time (if date replacer used) Default: dddd, mmmm dS, yyyy, h:MM:ss TT |
TIMEZONE |
Optional | Desired timezone of the date (can be locale-based or a GMT offset) Default: 0 |
COMMIT_MESSAGE |
Optional | Message used for committing changes Default: ⚡ Update README by replacing keywords |
COMMIT_EMAIL |
Optional | Email used for committing changes Default: 41898282+github-actions[bot]@users.noreply.github.com |
COMMIT_NAME |
Optional | Name used for committing changes Default: replacer-bot |
name: Readme Replacer - Readme-Workflows
description: Auto replace content from template file!
author: Readme-Workflows
inputs:
GH_USERNAME:
description: "Your GitHub username"
default: ${{ github.repository_owner }}
required: false
TEMPLATE_FILE:
description: "Path to template file"
default: "./TEMPLATE.md"
required: false
COMMIT_FILE:
description: "Path to commit file"
default: "./README.md"
required: false
CUSTOM_REPLACER_FILE:
description: "Path to custom replacer file (JSON). You can select if you want to use eval or not!"
default: "./.github/customReplacer.json"
required: false
DATE_FORMAT:
description: "Format of the date/time (if date replacer used)"
default: "dddd, mmmm dS, yyyy, h:MM:ss TT"
required: false
TIMEZONE:
description: "Desired timezone of the date (can be locale based of GMT offset)"
default: "0"
required: false
COMMIT_MESSAGE:
description: "Message used for committing changes"
default: "⚡ Update README by replacing keywords"
required: false
COMMIT_EMAIL:
description: "Email used for committing changes"
default: "41898282+github-actions[bot]@users.noreply.github.com"
required: false
COMMIT_NAME:
description: "Name used for committing changes"
default: "replacer-bot"
required: false
branding:
color: orange
icon: activity
runs:
using: node12
main: dist/index.js
Action ID: marketplace/mheap/github-action-auto-compile-node
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-auto-compile-node
Automatically compile node GitHub Actions to a single file
| Name | Required | Description |
|---|---|---|
main |
Optional | Entrypoint file of your action to compile. Default: index.js |
name: Auto-Compile Node Actions
description: Automatically compile node GitHub Actions to a single file
inputs:
main:
description: Entrypoint file of your action to compile.
default: index.js
runs:
using: docker
image: Dockerfile
branding:
icon: battery
color: green
Action ID: marketplace/atlassian/gajira-cli
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-cli
Set up Jira CLI
| Name | Required | Description |
|---|---|---|
version |
Optional | Version of the CLI to use Default: 1.0.27 |
# JavaScript action
name: Setup Jira
description: Set up Jira CLI
branding:
icon: 'check-square'
color: 'blue'
inputs:
version:
description: Version of the CLI to use
required: false
default: 1.0.27
runs:
using: 'node16'
main: './dist/index.js'
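A minimal setup sketch; the `@v3` ref is an assumed release tag and the `jira` binary name in the verification step is an assumption.
name: Jira CLI
on: workflow_dispatch
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: atlassian/gajira-cli@v3    # assumed ref
        with:
          version: 1.0.27                # documented input; also the default
      - run: jira --help                 # assumed binary name installed by the action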
Action ID: marketplace/CodelyTV/bash-github_actions-skeleton
Author: Unknown
Publisher: CodelyTV
Repository: github.com/CodelyTV/bash-github_actions-skeleton
The description of the action
| Name | Required | Description |
|---|---|---|
GITHUB_TOKEN |
Required | GitHub token |
another_input |
Required | A demo argument |
name: 'The name of your action'
description: 'The description of the action'
branding:
icon: 'alert-circle'
color: 'gray-dark'
inputs:
GITHUB_TOKEN:
description: 'GitHub token'
required: true
another_input:
description: 'A demo argument'
required: true
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.GITHUB_TOKEN }}
- ${{ inputs.another_input }}
Action ID: marketplace/ammaraskar/github-problem-matcher
Author: Ammar Askar
Publisher: ammaraskar
Repository: github.com/ammaraskar/github-problem-matcher
Attaches a problem matcher that looks for errors during Sphinx builds
name: Sphinx Problem Matcher
description: Attaches a problem matcher that looks for errors during Sphinx builds
author: Ammar Askar
branding:
icon: book
color: yellow
runs:
using: composite
steps:
- name: Activate the problem matcher
run: echo '::add-matcher::${{ github.action_path }}/sphinx_matcher.json'
shell: sh
Action ID: marketplace/github/licensed-ci
Author: Unknown
Publisher: github
Repository: github.com/github/licensed-ci
Ensure cached github/licensed license data is valid and up to date
| Name | Required | Description |
|---|---|---|
command |
Optional | Licensed command to run Default: licensed |
github_token |
Optional | Access token to push license updates to GitHub Default: ${{ github.token }} |
config_file |
Optional | Path to licensed configuration file Default: .licensed.yml |
user_name |
Optional | Name to use when pushing file changes Default: licensed-ci |
user_email |
Optional | Email to use when pushing file changes Default: licensed-ci@users.noreply.github.com |
commit_message |
Optional | Message to use when committing file changes Default: Auto-update license files |
pr_comment |
Optional | (Deprecated) Comment to add to a pull request, if one exists for the branch |
workflow |
Optional | Which workflow to run when metadata is updated. See README for more details. Default: push |
cleanup_on_success |
Optional | Whether to close open PRs and delete license branches on CI success in user branch. Only used by `branch` workflow Default: false |
branch |
Optional | Branch to run the action on when using `workflow_dispatch` or `schedule` event triggers |
dependabot_skip |
Optional | Whether to add [dependabot skip] to license update commits resulting from Dependabot updates Default: false |
sources |
Optional | Set to a string containing a comma-separated list of github/licensed source names to add `--sources` CLI arguments to cache and status commands. |
format |
Optional | Set to `yaml` or `json` to add the `--format` CLI argument to cache and status commands. |
| Name | Description |
|---|---|
licenses_branch |
The branch containing licensed-ci changes. |
user_branch |
The branch containing user changes. |
licenses_updated |
A boolean string indicating whether license files were updated. |
pr_url |
The html url of the pull request containing license updates, if available. |
pr_number |
The number of the pull request containing license updates, if available. |
pr_created |
True if a pull request was created in a `branch` workflow, false otherwise. |
name: 'Licensed CI'
description: 'Ensure cached github/licensed license data is valid and up to date'
inputs:
command:
description: 'Licensed command to run'
required: false
default: 'licensed'
github_token:
description: 'Access token to push license updates to GitHub'
required: false
default: ${{ github.token }}
config_file:
description: 'Path to licensed configuration file'
required: false
default: '.licensed.yml'
user_name:
description: 'Name to use when pushing file changes'
required: false
default: 'licensed-ci'
user_email:
description: 'Email to use when pushing file changes'
required: false
default: 'licensed-ci@users.noreply.github.com'
commit_message:
description: 'Message to use when committing file changes'
required: false
default: 'Auto-update license files'
pr_comment:
description: '(Deprecated) Comment to add to a pull request, if one exists for the branch'
required: false
workflow:
description: Which workflow to run when metadata is updated. See README for more details.
required: false
default: push
cleanup_on_success:
description: 'Whether to close open PRs and delete license branches on CI success in user branch. Only used by `branch` workflow'
required: false
default: 'false'
branch:
description: 'Branch to run the action on when using `workflow_dispatch` or `schedule` event triggers'
required: false
dependabot_skip:
description: 'Whether to add [dependabot skip] to license update commits resulting from Dependabot updates'
required: false
default: 'false'
sources:
description: 'Set to a string containing a comma-separated list of github/licensed source names to add `--sources` CLI arguments to cache and status commands.'
required: false
format:
description: 'Set to `yaml` or `json` to add the `--format` CLI argument to cache and status commands.'
required: false
outputs:
licenses_branch:
description: The branch containing licensed-ci changes.
user_branch:
description: The branch containing user changes.
licenses_updated:
description: A boolean string indicating whether license files were updated.
pr_url:
description: The html url of the pull request containing license updates, if available.
pr_number:
description: The number of the pull request containing license updates, if available.
pr_created:
description: True if a pull request was created in a `branch` workflow, false otherwise.
branding:
icon: check-circle
color: green
runs:
using: 'node20'
main: 'dist/index.js'
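A CI sketch for this action. The `@v1` ref and the Ruby/licensed setup steps are assumptions about the surrounding workflow (github/licensed is a Ruby gem and has to be available before the action runs).
name: Licensed
on: push
jobs:
  licensed:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1      # assumed prerequisite
        with:
          ruby-version: '3.2'
      - run: gem install licensed     # assumed install step for the licensed CLI
      - uses: github/licensed-ci@v1   # assumed ref
        with:
          workflow: branch            # documented input; 'push' is the default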
Action ID: marketplace/posener/goreadme
Author: Unknown
Publisher: posener
Repository: github.com/posener/goreadme
Updates readme from Go doc
| Name | Required | Description |
|---|---|---|
readme-file |
Optional | Name of readme file. Default: README.md |
debug |
Optional | Print Goreadme debug output. Set to any non empty value for true. |
email |
Optional | Email for commit message. Default: posener@gmail.com |
github-token |
Optional | Github token for PR comments. Optional. |
README_FILE |
Optional | An optional alias which can be used instead of 'readme-file'. |
GITHUB_TOKEN |
Optional | An optional alias which can be used instead of 'github-token'. |
import-path |
Optional | Override package import path. |
title |
Optional | Override readme title. Default is package name. |
godoc-url |
Optional | Go Doc URL for GoDoc badge. Default: https://pkg.go.dev |
recursive |
Optional | Load docs recursively. |
render-type-content |
Optional | If 'types' is specified, render full type content. |
constants |
Optional | Write package constants section, and if 'types' is specified, also write per-type constants section. |
variables |
Optional | Write package variables section, and if 'types' is specified, also write per-type variables section. |
functions |
Optional | Write functions section. |
types |
Optional | Write types section. |
factories |
Optional | If 'types' is specified, write section for functions returning each type. |
methods |
Optional | If 'types' is specified, write section for methods for each type. |
skip-examples |
Optional | Skip the examples section. |
skip-sub-packages |
Optional | Skip the sub packages section. |
badge-travisci |
Optional | Show TravisCI badge. |
badge-codecov |
Optional | Show CodeCov badge. |
badge-golangci |
Optional | Show GolangCI badge. |
badge-godoc |
Optional | Show GoDoc badge. |
badge-goreportcard |
Optional | Show GoReportCard badge. |
generated-notice |
Optional | Add generated file notice (visible only in Markdown code). |
credit |
Optional | Add credit line. Default: True |
# File generated by github.com/posener/goaction. DO NOT EDIT.
name: goreadme
description: Updates readme from Go doc
inputs:
readme-file:
default: README.md
description: "Name of readme file."
required: false
debug:
description: "Print Goreadme debug output. Set to any non empty value for true."
required: false
email:
default: posener@gmail.com
description: "Email for commit message."
required: false
github-token:
description: "Github token for PR comments. Optional."
required: false
README_FILE:
description: "An optional alias which can be used instead of 'readme-file'."
required: false
GITHUB_TOKEN:
description: "An optional alias which can be used instead of 'github-token'."
required: false
import-path:
description: "Override package import path."
required: false
title:
description: "Override readme title. Default is package name."
required: false
godoc-url:
default: https://pkg.go.dev
description: "Go Doc URL for GoDoc badge."
required: false
recursive:
default: false
description: "Load docs recursively."
required: false
render-type-content:
default: false
description: "If 'types' is specified, render full type content."
required: false
constants:
default: false
description: "Write package constants section, and if 'types' is specified, also write per-type constants section."
required: false
variables:
default: false
description: "Write package variables section, and if 'types' is specified, also write per-type variables section."
required: false
functions:
default: false
description: "Write functions section."
required: false
types:
default: false
description: "Write types section."
required: false
factories:
default: false
description: "If 'types' is specified, write section for functions returning each type."
required: false
methods:
default: false
description: "If 'types' is specified, write section for methods for each type."
required: false
skip-examples:
default: false
description: "Skip the examples section."
required: false
skip-sub-packages:
default: false
description: "Skip the sub packages section."
required: false
badge-travisci:
default: false
description: "Show TravisCI badge."
required: false
badge-codecov:
default: false
description: "Show CodeCov badge."
required: false
badge-golangci:
default: false
description: "Show GolangCI badge."
required: false
badge-godoc:
default: false
description: "Show GoDoc badge."
required: false
badge-goreportcard:
default: false
description: "Show GoReportCard badge."
required: false
generated-notice:
default: false
description: "Add generated file notice (visible only in Markdown code)."
required: false
credit:
default: true
description: "Add credit line."
required: false
runs:
using: docker
image: Dockerfile
env:
readme-file: "${{ inputs.readme-file }}"
debug: "${{ inputs.debug }}"
email: "${{ inputs.email }}"
github-token: "${{ inputs.github-token }}"
README_FILE: "${{ inputs.README_FILE }}"
GITHUB_TOKEN: "${{ inputs.GITHUB_TOKEN }}"
args:
- "-import-path=${{ inputs.import-path }}"
- "-title=${{ inputs.title }}"
- "-godoc-url=${{ inputs.godoc-url }}"
- "-recursive=${{ inputs.recursive }}"
- "-render-type-content=${{ inputs.render-type-content }}"
- "-constants=${{ inputs.constants }}"
- "-variables=${{ inputs.variables }}"
- "-functions=${{ inputs.functions }}"
- "-types=${{ inputs.types }}"
- "-factories=${{ inputs.factories }}"
- "-methods=${{ inputs.methods }}"
- "-skip-examples=${{ inputs.skip-examples }}"
- "-skip-sub-packages=${{ inputs.skip-sub-packages }}"
- "-badge-travisci=${{ inputs.badge-travisci }}"
- "-badge-codecov=${{ inputs.badge-codecov }}"
- "-badge-golangci=${{ inputs.badge-golangci }}"
- "-badge-godoc=${{ inputs.badge-godoc }}"
- "-badge-goreportcard=${{ inputs.badge-goreportcard }}"
- "-generated-notice=${{ inputs.generated-notice }}"
- "-credit=${{ inputs.credit }}"
branding:
icon: book-open
color: blue
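A minimal usage sketch; the `@v1` ref and the chosen flag values are illustrative assumptions drawn from the documented inputs.
name: goreadme
on:
  push:
    branches: [main]
jobs:
  readme:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: posener/goreadme@v1                       # assumed ref
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}     # documented input for PR comments
          badge-godoc: 'true'                           # documented inputs
          skip-sub-packages: 'true'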
Action ID: marketplace/azure/functions-container-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/functions-container-action
Deploy Functions Container to Azure
| Name | Required | Description |
|---|---|---|
app-name |
Required | Name of the Azure Function App |
image |
Required | Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. |
container-command |
Optional | Enter the start up command. For ex. 'dotnet run' or '/azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost' |
slot-name |
Optional | Function app slot to deploy to |
| Name | Description |
|---|---|
app-url |
URL to work with your function app |
name: 'Azure Functions Container Action'
description: 'Deploy Functions Container to Azure'
inputs:
app-name:
description: 'Name of the Azure Function App'
required: true
image:
description: "Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'."
required: true
container-command:
description: "Enter the start up command. For ex. 'dotnet run' or '/azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost'"
required: false
slot-name:
description: 'Function app slot to deploy to'
required: false
outputs:
app-url:
description: 'URL to work with your function app'
branding:
icon: 'container-functionapp.svg'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main.js'
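A deployment sketch for this action. The `@v1` ref, the preceding `azure/login` step, and the placeholder app/image names are assumptions; the `app-url` output is documented above.
name: Deploy Functions container
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2                                 # assumed prerequisite for Azure authentication
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - id: fa
        uses: Azure/functions-container-action@v1            # assumed ref
        with:
          app-name: my-function-app                          # placeholder Function App name
          image: myregistry.azurecr.io/my-functions:latest   # placeholder image
      - run: echo "Deployed to ${{ steps.fa.outputs.app-url }}"   # documented output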
Action ID: marketplace/aws-actions/stale-issue-cleanup
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/stale-issue-cleanup
Close issues and pull requests with no recent activity
| Name | Required | Description |
|---|---|---|
repo-token |
Required | Token for the repository. Can be passed in using {{ secrets.GITHUB_TOKEN }} |
issue-types |
Optional | Issue types to process ("issues", "pull_requests", or "issues,pull_requests") Default: issues,pull_requests |
stale-issue-message |
Optional | The message to post on the issue when tagging it. If none provided, will not mark issues stale. |
stale-pr-message |
Optional | The message to post on the pr when tagging it. If none provided, will not mark pull requests stale. |
days-before-stale |
Optional | The number of days old an issue can be before marking it stale. Default: 60 |
days-before-close |
Optional | The number of days to wait to close an issue or pull request after it being marked stale. Default: 7 |
stale-issue-label |
Optional | The label to apply when an issue is stale. Default: Stale |
exempt-issue-labels |
Optional | The labels to apply when an issue is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2") |
stale-pr-label |
Optional | The label to apply when a pull request is stale. Default: Stale |
exempt-pr-labels |
Optional | The labels to apply when a pull request is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2") |
ancient-issue-message |
Optional | The message to post when an issue is very old. |
ancient-pr-message |
Optional | The message to post when a pr is very old. |
days-before-ancient |
Optional | The number of days old an issue can be before marking it ancient. Default: 360 |
response-requested-label |
Optional | The label that gets applied when a response is requested. |
closed-for-staleness-label |
Optional | The label that gets applied when an issue is closed for staleness. |
minimum-upvotes-to-exempt |
Optional | The minimum number of "upvotes" that an issue needs to have before not marking as ancient. |
loglevel |
Optional | Set to DEBUG to enable debug logging |
dry-run |
Optional | Set to true to not perform repository changes |
use-created-date-for-ancient |
Optional | Set to true to use issue created date instead of modified date for determining an ancient issue. |
name: "'Stale Issue Cleanup' Action for GitHub Actions"
description: 'Close issues and pull requests with no recent activity'
branding:
icon: 'cloud'
color: 'orange'
inputs:
repo-token:
description: 'Token for the repository. Can be passed in using {{ secrets.GITHUB_TOKEN }}'
required: true
issue-types:
description: 'Issue types to process ("issues", "pull_requests", or "issues,pull_requests")'
default: 'issues,pull_requests'
stale-issue-message:
description: 'The message to post on the issue when tagging it. If none provided, will not mark issues stale.'
stale-pr-message:
description: 'The message to post on the pr when tagging it. If none provided, will not mark pull requests stale.'
days-before-stale:
description: 'The number of days old an issue can be before marking it stale.'
default: 60
days-before-close:
description: 'The number of days to wait to close an issue or pull request after it being marked stale.'
default: 7
stale-issue-label:
description: 'The label to apply when an issue is stale.'
default: 'Stale'
exempt-issue-labels:
description: 'The labels to apply when an issue is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2")'
stale-pr-label:
description: 'The label to apply when a pull request is stale.'
default: 'Stale'
exempt-pr-labels:
description: 'The labels to apply when a pull request is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2")'
ancient-issue-message:
description: 'The message to post when an issue is very old.'
ancient-pr-message:
description: 'The message to post when a pr is very old.'
days-before-ancient:
description: 'The number of days old an issue can be before marking it ancient.'
default: 360
response-requested-label:
description: 'The label that gets applied when a response is requested.'
closed-for-staleness-label:
description: 'The label that gets applied when an issue is closed for staleness.'
minimum-upvotes-to-exempt:
description: 'The minimum number of "upvotes" that an issue needs to have before not marking as ancient.'
loglevel:
description: 'Set to DEBUG to enable debug logging'
dry-run:
description: 'Set to true to not perform repository changes'
use-created-date-for-ancient:
description: 'Set to true to use issue created date instead of modified date for determining an ancient issue.'
runs:
using: 'node20'
main: 'dist/index.js'
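A scheduled cleanup sketch; the `@v6` ref and the message text are assumptions, with only documented inputs used and `dry-run` left on so no changes are made while testing.
name: Stale issue cleanup
on:
  schedule:
    - cron: '0 0 * * *'
jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/stale-issue-cleanup@v6       # assumed ref
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: This issue has had no recent activity and will be closed soon.
          days-before-stale: 60                        # documented defaults, shown explicitly
          days-before-close: 7
          dry-run: true                                # documented input: log only, no repository changes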
Action ID: marketplace/gradle/wrapper-validation-action
Author: Gradle
Publisher: gradle
Repository: github.com/gradle/wrapper-validation-action
Validates Gradle Wrapper JAR Files
| Name | Required | Description |
|---|---|---|
min-wrapper-count |
Optional | Minimum number of gradle-wrapper.jar files expected to be found in the repository. Non-negative number. A higher number is useful in monorepos where each project might have its own wrapper. Default: 1 |
allow-snapshots |
Optional | Allow Gradle snapshot versions during checksum verification. Boolean, true or false. Default: false |
allow-checksums |
Optional | Accept arbitrary user-defined checksums as valid. Comma separated list of SHA256 checksums (lowercase hex). |
| Name | Description |
|---|---|
failed-wrapper |
The path of the Gradle Wrapper(s) JAR that failed validation. Path is a platform-dependent relative path to git repository root. Multiple paths are separated by a | character. |
name: 'Gradle Wrapper Validation'
description: 'Validates Gradle Wrapper JAR Files'
author: 'Gradle'
inputs:
min-wrapper-count:
description: 'Minimum number of gradle-wrapper.jar files expected to be found in the repository. Non-negative number. A higher number is useful in monorepos where each project might have its own wrapper.'
required: false
default: '1'
allow-snapshots:
description: 'Allow Gradle snapshot versions during checksum verification. Boolean, true or false.'
required: false
default: 'false'
allow-checksums:
description: 'Accept arbitrary user-defined checksums as valid. Comma separated list of SHA256 checksums (lowercase hex).'
required: false
default: ''
outputs:
failed-wrapper:
description: 'The path of the Gradle Wrapper(s) JAR that failed validation. Path is a platform-dependent relative path to git repository root. Multiple paths are separated by a | character.'
value: ${{ steps.wrapper-validation.outputs.failed-wrapper }}
runs:
using: "composite"
steps:
- name: Wrapper Validation
id: wrapper-validation
uses: gradle/actions/wrapper-validation@v3.5.0
with:
min-wrapper-count: ${{ inputs.min-wrapper-count }}
allow-snapshots: ${{ inputs.allow-snapshots }}
allow-checksums: ${{ inputs.allow-checksums }}
env:
GRADLE_ACTION_ID: gradle/wrapper-validation-action
branding:
icon: 'shield'
color: gray-dark
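A minimal validation sketch; the `@v3` ref is an assumed tag (the metadata above shows the action delegating to gradle/actions/wrapper-validation), and no inputs are needed beyond a checkout.
name: Validate Gradle wrapper
on: [push, pull_request]
jobs:
  validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: gradle/wrapper-validation-action@v3   # assumed ref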
Action ID: marketplace/azure/apim-policy-update
Author: Azure
Publisher: azure
Repository: github.com/azure/apim-policy-update
Update Azure API Management policies from Git repository using REST API
| Name | Required | Description |
|---|---|---|
subscription_id |
Required | Azure subscription ID |
resource_group |
Required | Azure resource group name |
apim_name |
Required | Azure API Management service name |
policy_manifest_path |
Optional | Path to policy manifest file (optional) |
| Name | Description |
|---|---|
etag |
ETag of the last updated resource |
name: Azure API Management Policy Update
description:
Update Azure API Management policies from Git repository using REST API
author: Azure
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: cloud
color: blue
# Define your inputs here.
inputs:
subscription_id:
description: Azure subscription ID
required: true
resource_group:
description: Azure resource group name
required: true
apim_name:
description: Azure API Management service name
required: true
policy_manifest_path:
description: Path to policy manifest file (optional)
required: false
default: ''
# Define your outputs here.
outputs:
etag:
description: ETag of the last updated resource
runs:
using: node20
main: dist/index.js
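A hedged usage sketch using only the documented inputs; the `@v1` tag, secret name, and resource names are placeholders.
- name: Update APIM policies
  uses: azure/apim-policy-update@v1                # version tag is an assumption
  with:
    subscription_id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
    resource_group: my-resource-group              # placeholder
    apim_name: my-apim-service                     # placeholder
    policy_manifest_path: policies/manifest.json   # optional input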
Action ID: marketplace/MarketingPipeline/TOC-Generator-Action
Author: github.com/MarketingPipeline
Publisher: MarketingPipeline
Repository: github.com/MarketingPipeline/TOC-Generator-Action
A GitHub Action to generate a Table of Contents in your README.md
name: 'TOC Generator'
description: 'A GitHub Action to generate a Table of Contents in your README.md'
author: 'github.com/MarketingPipeline'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'activity'
color: 'white'
Action ID: marketplace/amirisback/android-exoplayer-media3
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-exoplayer-media3
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/aws-actions/aws-codebuild-run-build
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/aws-codebuild-run-build
Execute CodeBuild::startBuild for the current repo.
| Name | Required | Description |
|---|---|---|
| project-name | Required | AWS CodeBuild Project Name |
| buildspec-override | Optional | Buildspec Override |
| compute-type-override | Optional | The name of a compute type for this build that overrides the one specified in the build project. |
| environment-type-override | Optional | A container type for this build that overrides the one specified in the build project. |
| image-override | Optional | The name of an image for this build that overrides the one specified in the build project. |
| image-pull-credentials-type-override | Optional | The type of credentials CodeBuild uses to pull images in your build. |
| env-vars-for-codebuild | Optional | Comma separated list of environment variables to send to CodeBuild |
| update-interval | Optional | How often the action calls the API for updates |
| update-back-off | Optional | Base back-off time for the update calls for API if rate-limiting is encountered |
| disable-source-override | Optional | Set to `true` if you want to disable source repo override |
| source-version-override | Optional | The source version that overrides the sourceVersion provided to Codebuild. |
| source-type-override | Optional | The source input type that overrides the source input defined in the build project for this build. Valid values include NO_SOURCE, CODECOMMIT, CODEPIPELINE, GITHUB, S3, BITBUCKET, and GITHUB_ENTERPRISE. |
| source-location-override | Optional | The location that overrides the source location defined in the build project for this build. |
| hide-cloudwatch-logs | Optional | Set to `true` to prevent the CloudWatch logs from streaming the output to GitHub |
| disable-github-env-vars | Optional | Set to `true` if you want to disable github environment variables in codebuild |
| artifacts-type-override | Optional | The type of build output artifact |
| stop-on-signals | Optional | Comma separated list of process signals on which to stop the build. Default is SIGINT. Default: SIGINT |
| Name | Description |
|---|---|
| aws-build-id | The AWS CodeBuild Build ID for this build. |
name: '"AWS CodeBuild run build" Action For GitHub Actions'
description: 'Execute CodeBuild::startBuild for the current repo.'
branding:
icon: 'cloud'
color: 'orange'
inputs:
project-name:
description: 'AWS CodeBuild Project Name'
required: true
buildspec-override:
description: 'Buildspec Override'
required: false
compute-type-override:
description: 'The name of a compute type for this build that overrides the one specified in the build project.'
required: false
environment-type-override:
description: 'A container type for this build that overrides the one specified in the build project.'
required: false
image-override:
description: 'The name of an image for this build that overrides the one specified in the build project.'
required: false
image-pull-credentials-type-override:
description: 'The type of credentials CodeBuild uses to pull images in your build.'
required: false
env-vars-for-codebuild:
description: 'Comma separated list of environment variables to send to CodeBuild'
required: false
update-interval:
description: 'How often the action calls the API for updates'
required: false
update-back-off:
description: 'Base back-off time for the update calls for API if rate-limiting is encountered'
required: false
disable-source-override:
description: 'Set to `true` if you want to disable source repo override'
required: false
source-version-override:
description: 'The source version that overrides the sourceVersion provided to Codebuild.'
required: false
source-type-override:
description: 'The source input type that overrides the source input defined in the build project for this build. Valid values include NO_SOURCE, CODECOMMIT, CODEPIPELINE, GITHUB, S3, BITBUCKET, and GITHUB_ENTERPRISE.'
required: false
source-location-override:
description: 'The location that overrides the source location defined in the build project for this build.'
required: false
hide-cloudwatch-logs:
description: 'Set to `true` to prevent the CloudWatch logs from streaming the output to GitHub'
required: false
disable-github-env-vars:
description: 'Set to `true` if you want to disable github environment variables in codebuild'
required: false
artifacts-type-override:
description: 'The type of build output artifact'
required: false
stop-on-signals:
description: 'Comma separated list of process signals on which to stop the build. Default is SIGINT.'
required: false
default: 'SIGINT'
outputs:
aws-build-id:
description: 'The AWS CodeBuild Build ID for this build.'
runs:
using: 'node20'
main: 'dist/index.js'
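A minimal usage sketch; the version tag and project name are placeholders, and only documented inputs and outputs are used.
- name: Run CodeBuild
  id: codebuild
  uses: aws-actions/aws-codebuild-run-build@v1     # version tag is an assumption
  with:
    project-name: my-codebuild-project             # placeholder project name
    buildspec-override: buildspec-ci.yml           # optional
- name: Print build id
  run: echo "Build ${{ steps.codebuild.outputs.aws-build-id }}"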
Action ID: marketplace/creyD/changelog_action
Author: Conrad Großer <grosserconrad@gmail.com>
Publisher: creyD
Repository: github.com/creyD/changelog_action
Automatically creates a changelog for Pull Requests
name: Changelog Action
description: Automatically creates a changelog for Pull Requests
author: Conrad Großer <grosserconrad@gmail.com>
runs:
using: "composite"
steps:
- name: Prettify code!
shell: bash
run: >-
${{ github.action_path }}/entrypoint.sh
env:
INPUT_HEAD_REF: ${{ github.head_ref }}
INPUT_BASE_REF: ${{ github.base_ref }}
branding:
icon: "activity"
color: "green"
Action ID: marketplace/actions/container-prebuilt-action
Author: Your name or organization here
Publisher: actions
Repository: github.com/actions/container-prebuilt-action
Provide a description here
| Name | Required | Description |
|---|---|---|
| who-to-greet | Required | Your input description here Default: World |
| Name | Description |
|---|---|
| greeting | Your output description here |
name: The name of your action here
description: Provide a description here
author: Your name or organization here
# Define your inputs here.
inputs:
who-to-greet:
description: Your input description here
required: true
default: World
# Define your outputs here.
outputs:
greeting:
description: Your output description here
# Update the image repository and tag below.
runs:
using: docker
image: docker://ghcr.io/actions/container-prebuilt-action:v0.2.0
env:
INPUT_WHO_TO_GREET: ${{ inputs.who-to-greet }}
Action ID: marketplace/azure/appservice-settings
Author: Unknown
Publisher: azure
Repository: github.com/azure/appservice-settings
Configure Azure Apps with app settings, connection strings and other general configuration settings
| Name | Required | Description |
|---|---|---|
| app-name | Required | Name of the Azure Web App |
| slot-name | Optional | Name of an existing slot other than the production slot. Default value is production |
| app-settings-json | Optional | Application settings using the JSON syntax set as value of secret variable: APP_SETTINGS |
| connection-strings-json | Optional | Connection Strings using the JSON syntax set as value of secret variable: CONNECTION_STRINGS |
| general-settings-json | Optional | General configuration settings using dictionary syntax - Key Value pairs |
| mask-inputs | Optional | Set it to false if you want to provide input jsons as plain text/you do not want input json values to be masked. This will apply to app-settings-json and connection-strings-json. Default is true Default: True |
| Name | Description |
|---|---|
| webapp-url | URL to work with your webapp |
# app service settings action
name: 'Azure App Service Settings'
description: 'Configure Azure Apps with app settings, connection strings and other general configuration settings'
inputs:
app-name: # id of input
description: 'Name of the Azure Web App'
required: true
slot-name: #id of input
description: 'Name of an existing slot other than the production slot. Default value is production'
required: false
app-settings-json: #id of input
description: 'Application settings using the JSON syntax set as value of secret variable: APP_SETTINGS'
required: false
connection-strings-json: #id of input
description: 'Connection Strings using the JSON syntax set as value of secret variable: CONNECTION_STRINGS'
required: false
general-settings-json: #id of input
description: 'General configuration settings using dictionary syntax - Key Value pairs'
required: false
mask-inputs:
description: 'Set it to false if you want to provide input jsons as plain text/you do not want input json values to be masked. This will apply to app-settings-json and connection-strings-json. Default is true'
required: false
default: true
outputs:
webapp-url: # id of output
description: 'URL to work with your webapp'
branding:
icon: 'webapp.svg'
color: 'blue'
runs:
using: 'node16'
main: 'lib/main.js'
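A hedged usage sketch; the version tag, app name, and JSON value are placeholders, while the secret names follow the input descriptions above.
- name: Apply App Service settings
  uses: azure/appservice-settings@v1               # version tag is an assumption
  with:
    app-name: my-webapp                            # placeholder app name
    app-settings-json: ${{ secrets.APP_SETTINGS }}
    connection-strings-json: ${{ secrets.CONNECTION_STRINGS }}
    general-settings-json: '{"alwaysOn": "true"}'  # placeholder settings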
Action ID: marketplace/azure/aml-workspace
Author: azure/gh-aml
Publisher: azure
Repository: github.com/azure/aml-workspace
Connect to or create an Azure Machine Learning Workspace with this GitHub Action
| Name | Required | Description |
|---|---|---|
| azure_credentials | Required | Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS |
| parameters_file | Optional | JSON file including the parameters of the workspace. Default: workspace.json |
name: "Azure Machine Learning Workspace"
description: "Connect to or create an Azure Machine Learning Workspace with this GitHub Action"
author: "azure/gh-aml"
inputs:
azure_credentials:
description: "Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS"
required: true
parameters_file:
description: "JSON file including the parameters of the workspace."
required: false
default: "workspace.json"
branding:
icon: chevron-up
color: "blue"
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/peter-evans/ghaction-import-gpg
Author: crazy-max
Publisher: peter-evans
Repository: github.com/peter-evans/ghaction-import-gpg
GitHub Action to easily import a GPG key
| Name | Required | Description |
|---|---|---|
| gpg_private_key | Required | GPG private key exported as an ASCII armored version or its base64 encoding |
| passphrase | Optional | Passphrase of the GPG private key |
| trust_level | Optional | Set key's trust level |
| git_config_global | Optional | Set Git config global Default: false |
| git_user_signingkey | Optional | Set GPG signing keyID for this Git repository Default: false |
| git_commit_gpgsign | Optional | Sign all commits automatically Default: false |
| git_tag_gpgsign | Optional | Sign all tags automatically Default: false |
| git_push_gpgsign | Optional | Sign all pushes automatically Default: if-asked |
| git_committer_name | Optional | Commit author's name |
| git_committer_email | Optional | Commit author's email |
| workdir | Optional | Working directory (below repository root) Default: . |
| fingerprint | Optional | Specific fingerprint to use (subkey) |
| Name | Description |
|---|---|
| fingerprint | Fingerprint of the GPG key (recommended as user ID) |
| keyid | Low 64 bits of the X.509 certificate SHA-1 fingerprint |
| name | Name associated with the GPG key |
| email | Email address associated with the GPG key |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Import GPG'
description: 'GitHub Action to easily import a GPG key'
author: 'crazy-max'
branding:
color: 'yellow'
icon: 'lock'
inputs:
gpg_private_key:
description: 'GPG private key exported as an ASCII armored version or its base64 encoding'
required: true
passphrase:
description: 'Passphrase of the GPG private key'
required: false
trust_level:
description: "Set key's trust level"
required: false
git_config_global:
description: 'Set Git config global'
default: 'false'
required: false
git_user_signingkey:
description: 'Set GPG signing keyID for this Git repository'
default: 'false'
required: false
git_commit_gpgsign:
description: 'Sign all commits automatically'
default: 'false'
required: false
git_tag_gpgsign:
description: 'Sign all tags automatically'
default: 'false'
required: false
git_push_gpgsign:
description: 'Sign all pushes automatically'
default: 'if-asked'
required: false
git_committer_name:
description: 'Commit author''s name'
required: false
git_committer_email:
description: 'Commit author''s email'
required: false
workdir:
description: 'Working directory (below repository root)'
default: '.'
required: false
fingerprint:
description: 'Specific fingerprint to use (subkey)'
required: false
outputs:
fingerprint:
description: 'Fingerprint of the GPG key (recommended as user ID)'
keyid:
description: 'Low 64 bits of the X.509 certificate SHA-1 fingerprint'
name:
description: 'Name associated with the GPG key'
email:
description: 'Email address associated with the GPG key'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
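A hedged usage sketch; the version tag and secret names are assumptions, and only documented inputs and outputs are used.
- name: Import GPG key
  id: import-gpg
  uses: peter-evans/ghaction-import-gpg@v4         # version tag is an assumption
  with:
    gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
    passphrase: ${{ secrets.GPG_PASSPHRASE }}
    git_user_signingkey: 'true'
    git_commit_gpgsign: 'true'
- name: Show key fingerprint
  run: echo "${{ steps.import-gpg.outputs.fingerprint }}"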
Action ID: marketplace/tgymnich/action
Author: Valar
Publisher: tgymnich
Repository: github.com/tgymnich/action
GitHub Action for Valar
| Name | Required | Description |
|---|---|---|
| token | Required | Valar token |
| workdir | Optional | Path where valar push will be run |
name: 'valar-action'
description: 'GitHub Action for Valar'
author: 'Valar'
branding:
icon: 'upload-cloud'
color: 'red'
inputs:
token:
description: 'Valar token'
required: true
workdir:
description: 'Path where valar push will be run'
required: false
runs:
using: 'node16'
main: 'lib/main.js'
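A minimal usage sketch; the version tag, secret name, and working directory are placeholders.
- name: Push to Valar
  uses: tgymnich/action@v1                         # version tag is an assumption
  with:
    token: ${{ secrets.VALAR_TOKEN }}              # placeholder secret name
    workdir: ./service                             # optional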
Action ID: marketplace/mszostok/kind-action
Author: The Helm authors
Publisher: mszostok
Repository: github.com/mszostok/kind-action
Create a kind (Kubernetes IN Docker) cluster
| Name | Required | Description |
|---|---|---|
| version | Optional | The kind version to use (default: v0.29.0) Default: v0.29.0 |
| config | Optional | The path to the kind config file |
| kubeconfig | Optional | The path to the kubeconfig config file |
| node_image | Optional | The Docker image for the cluster nodes |
| cluster_name | Optional | The name of the cluster to create (default: chart-testing) Default: chart-testing |
| wait | Optional | The duration to wait for the control plane to become ready (default: 60s) Default: 60s |
| verbosity | Optional | The verbosity level for kind (default: 0) Default: 0 |
| kubectl_version | Optional | The kubectl version to use (default: v1.31.11) Default: v1.31.11 |
| registry | Optional | Whether to configure an insecure local registry (default: false) Default: false |
| registry_image | Optional | The registry image to use (default: registry:2) Default: registry:2 |
| registry_name | Optional | The registry name to use (default: kind-registry) Default: kind-registry |
| registry_port | Optional | The local port used to bind the registry (default: 5000) Default: 5000 |
| registry_enable_delete | Optional | Enable delete operations on the registry (default: false) Default: false |
| install_only | Optional | Skips cluster creation, only install kind (default: false) |
| ignore_failed_clean | Optional | Whether to ignore a failed cluster deletion in the post step (default: false) Default: false |
| cloud_provider | Optional | Whether to use cloud provider loadbalancer (default: false) Default: false |
name: "Kind Cluster"
description: "Create a kind (Kubernetes IN Docker) cluster"
author: "The Helm authors"
branding:
color: blue
icon: box
inputs:
version:
description: "The kind version to use (default: v0.29.0)"
required: false
default: "v0.29.0"
config:
description: "The path to the kind config file"
required: false
kubeconfig:
description: "The path to the kubeconfig config file"
required: false
node_image:
description: "The Docker image for the cluster nodes"
required: false
cluster_name:
description: "The name of the cluster to create (default: chart-testing)"
required: false
default: "chart-testing"
wait:
description: "The duration to wait for the control plane to become ready (default: 60s)"
required: false
default: "60s"
verbosity:
description: "The verbosity level for kind (default: 0)"
default: "0"
required: false
kubectl_version:
description: "The kubectl version to use (default: v1.31.11)"
required: false
default: "v1.31.11"
registry:
description: "Whether to configure an insecure local registry (default: false)"
required: false
default: "false"
registry_image:
description: "The registry image to use (default: registry:2)"
required: false
default: "registry:2"
registry_name:
description: "The registry name to use (default: kind-registry)"
required: false
default: "kind-registry"
registry_port:
description: "The local port used to bind the registry (default: 5000)"
required: false
default: "5000"
registry_enable_delete:
description: "Enable delete operations on the registry (default: false)"
required: false
default: "false"
install_only:
description: "Skips cluster creation, only install kind (default: false)"
required: false
ignore_failed_clean:
description: "Whether to ignore the post-delete the cluster (default: false)"
default: "false"
required: false
cloud_provider:
description: "Whether to use cloud provider loadbalancer (default: false)"
required: false
default: "false"
runs:
using: "node20"
main: "main.js"
post: "cleanup.js"
Action ID: marketplace/upsidr/comvent
Author: rytswd
Publisher: upsidr
Repository: github.com/upsidr/comvent
Comment Event ⚡️ handler, which can be used as a building block of ChatBot like setup.
| Name | Required | Description |
|---|---|---|
| token | Required | Your GitHub Token, often specified with GITHUB_TOKEN |
| config-path | Optional | The path to keyword setup Default: .github/comvent-setup.yaml |
| config-check-only | Optional | A flag to skip comment check so that you can check config validity only |
name: Comvent - Comment Event
description: Comment Event ⚡️ handler, which can be used as a building block of ChatBot like setup.
author: rytswd
branding:
icon: zap
color: green
inputs:
token:
required: true
description: Your GitHub Token, often specified with GITHUB_TOKEN
config-path:
description: The path to keyword setup
required: false
default: .github/comvent-setup.yaml
config-check-only:
description: A flag to skip comment check so that you can check config validity only
required: false
default: ""
runs:
using: node16
main: dist/index.js
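A hedged usage sketch; the issue_comment trigger, checkout step, and version tag are assumptions, while the config path matches the documented default.
on:
  issue_comment:
    types: [created]
jobs:
  comvent:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: upsidr/comvent@v0                    # version tag is an assumption
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          config-path: .github/comvent-setup.yaml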
Action ID: marketplace/federicocarboni/setup-ffmpeg
Author: Federico Carboni
Publisher: federicocarboni
Repository: github.com/federicocarboni/setup-ffmpeg
Download, cache and add to PATH ffmpeg and ffprobe binaries.
| Name | Required | Description |
|---|---|---|
| ffmpeg-version | Optional | Version of ffmpeg to use. Version Spec is only fully supported on Windows. Default: release |
| architecture | Optional | Target architecture for FFmpeg, on Windows and MacOS only x64 is supported; on Linux arm64 may also be used. Defaults to the system architecture. |
| linking-type | Optional | Linking type of the binaries. Use "shared" to download shared binaries and "static" for statically linked ones. Shared builds are currently only available for windows releases. Defaults to "static" Default: static |
| github-token | Optional | Used to pull Windows builds from GyanD/codexffmpeg. Since there's a default, this is typically not supplied by the user. Default: ${{ github.server_url == 'https://github.com' && github.token \|\| '' }} |
| Name | Description |
|---|---|
| ffmpeg-version | The installed ffmpeg version |
| ffmpeg-path | Path to the ffmpeg binaries |
| cache-hit | A boolean value to indicate whether the tool cache was hit |
name: Setup FFmpeg
description: 'Download, cache and add to PATH ffmpeg and ffprobe binaries.'
author: Federico Carboni
inputs:
ffmpeg-version:
description: 'Version of ffmpeg to use. Version Spec is only fully supported on Windows.'
default: release
architecture:
description: 'Target architecture for FFmpeg, on Windows and MacOS only x64 is supported; on Linux arm64 may also be used. Defaults to the system architecture.'
linking-type:
description: Linking type of the binaries. Use "shared" to download shared binaries and "static" for statically linked ones. Shared builds are currently only available for windows releases. Defaults to "static"
default: static
github-token:
description: "Used to pull Windows builds from GyanD/codexffmpeg. Since there's a default, this is typically not supplied by the user."
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
outputs:
ffmpeg-version:
description: The installed ffmpeg version
ffmpeg-path:
description: Path to the ffmpeg binaries
cache-hit:
description: A boolean value to indicate whether the tool cache was hit
runs:
using: node20
main: dist/index.js
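A minimal usage sketch; the version tag is an assumption and the input values mirror the documented defaults.
- name: Setup FFmpeg
  id: setup-ffmpeg
  uses: federicocarboni/setup-ffmpeg@v3            # version tag is an assumption
  with:
    ffmpeg-version: release
    linking-type: static
- run: ffmpeg -version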
Action ID: marketplace/peter-evans/rust-wasm-action
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/rust-wasm-action
Hello World from Rust-generated WebAssembly!
| Name | Required | Description |
|---|---|---|
| actor | Optional | The user that triggered the workflow Default: ${{ github.actor }} |
| Name | Description |
|---|---|
| result | The result of the action execution |
name: 'Hello World'
description: 'Hello World from Rust-generated WebAssembly!'
inputs:
actor:
description: 'The user that triggered the workflow'
default: ${{ github.actor }}
outputs:
result:
description: 'The result of the action execution'
runs:
using: 'node16'
main: 'dist/index.js'
Action ID: marketplace/aws-actions/aws-devicefarm-mobile-device-testing
Author: AWS Device Farm
Publisher: aws-actions
Repository: github.com/aws-actions/aws-devicefarm-mobile-device-testing
GitHub action for automated mobile device testing on AWS Device Farm
| Name | Required | Description |
|---|---|---|
| run-settings-json | Required | The Run Settings as a json string. The schema for the contents of this file can be found here: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-device-farm/Interface/ScheduleRunCommandInput/ The following fields support supplying an ARN or a Name. - projectArn - devicePoolArn - networkProfileArn - vpceConfigurationArns The following fields support supplying an ARN or a path to the file to be found within the repository: - appArn - testPackageArn - testSpecArn - extraDataPackageArn NOTE: If the file specified is not found in the repo the existing Project Uploads in Device Farm will be searched for one with matching type and name. |
| artifact-types | Optional | (Optional) A comma delimited list of Device Farm Artifacts that should be downloaded after the job completes. https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-device-farm/Interface/ListArtifactsCommandOutput/ Note: To download all artifact types set the value to ALL. To download none skip this input. They will be downloaded to a folder. The name of the folder can be found by referencing the output with name artifact-folder. Please use the GitHub Action [upload-artifact](https://github.com/actions/upload-artifact) to store them. |
| upload-poll-interval | Optional | (Optional) The duration (in milliseconds) between successive polls for the status of the file upload. Default: 1000 |
| run-poll-interval | Optional | (Optional) The duration (in milliseconds) between successive polls for the status of the test run. Default: 30000 |
| Name | Description |
|---|---|
| arn | The ARN of the AWS Device Farm Automated Test Run |
| status | The status of the Automated Test Run |
| result | The result of the Automated Test Run |
| artifact-folder | The name of the folder that the test artifacts are downloaded into |
| console-url | The AWS Console URL for the test run |
name: 'AWS Device Farm Mobile Device Testing GitHub Action'
author: 'AWS Device Farm'
description: 'GitHub action for automated mobile device testing on AWS Device Farm'
branding:
icon: 'cloud'
color: 'orange'
inputs:
run-settings-json:
description: >-
The Run Settings as a json string.
The schema for the contents of this file can be found here:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-device-farm/Interface/ScheduleRunCommandInput/
The following fields support supplying an ARN or a Name.
- projectArn
- devicePoolArn
- networkProfileArn
- vpceConfigurationArns
The following fields support supplying an ARN or a path to the file to be found within the repository:
- appArn
- testPackageArn
- testSpecArn
- extraDataPackageArn
NOTE: If the file specified is not found in the repo the existing Project Uploads in Device Farm will be searched for one with matching type and name.
required: true
artifact-types:
description: >-
(Optional) A comma delimited list of Device Farm Artifacts that should be downloaded after the job completes.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-device-farm/Interface/ListArtifactsCommandOutput/
Note: To download all artifact types set the value to ALL. To download none skip this input.
They will be downloaded to a folder. The name of the folder can be found by referencing the output with name artifact-folder. Please use the GitHub Action [upload-artifact](https://github.com/actions/upload-artifact) to store them.
required: false
default: ''
upload-poll-interval:
description: >-
(Optional) The duration (in milliseconds) between successive polls for the status of the file upload.
required: false
default: '1000'
run-poll-interval:
description: >-
(Optional) The duration (in milliseconds) between successive polls for the status of the test run.
required: false
default: '30000'
outputs:
arn:
description: 'The ARN of the AWS Device Farm Automated Test Run'
status:
description: 'The status of the Automated Test Run'
result:
description: 'The result of the Automated Test Run'
artifact-folder:
description: 'The name of the folder that the test artifacts are downloaded into'
console-url:
description: 'The AWS Console URL for the test run'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/cleanup/index.js'
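A hedged usage sketch; the version tag, project and device pool names, app path, and test type are placeholders chosen to illustrate the documented run-settings-json and artifact handling, with field names following the ScheduleRunCommandInput schema referenced above.
- name: Run tests on Device Farm
  id: devicefarm
  uses: aws-actions/aws-devicefarm-mobile-device-testing@v1   # version tag is an assumption
  with:
    run-settings-json: |
      {
        "projectArn": "my-device-farm-project",
        "devicePoolArn": "Top Devices",
        "name": "ci-run",
        "appArn": "app/my-app.apk",
        "test": { "type": "BUILTIN_FUZZ" }
      }
    artifact-types: ALL
- uses: actions/upload-artifact@v4
  with:
    name: device-farm-artifacts
    path: ${{ steps.devicefarm.outputs.artifact-folder }}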
Action ID: marketplace/wlixcc/SFTP-Deploy-Action
Author: Unknown
Publisher: wlixcc
Repository: github.com/wlixcc/SFTP-Deploy-Action
Deploy files to your server using sftp & ssh private key
| Name | Required | Description |
|---|---|---|
| username | Required | username |
| server | Required | your sftp server |
| port | Required | your sftp server port, default to 22 Default: 22 |
| ssh_private_key | Required | you can copy private_key from your *.pem file, keep format |
| local_path | Required | will put all files under this path Default: ./* |
| remote_path | Required | files will be copied under remote_path |
| sftp_only | Optional | connection via sftp protocol only |
| sftpArgs | Optional | sftp args |
| delete_remote_files | Optional | This operation will delete all files in the remote path before upload. Please be careful when setting this to true |
| password | Optional | SSH password. If a password is set, the secret key pair is ignored |
| rsyncArgs | Optional | Additional arguments for the rsync command. You can use this parameter to customize the rsync behavior, such as excluding files or directories. Example: '--exclude=node_modules --exclude=.git --exclude=*.log'. If set, these arguments will be passed directly to the rsync command. |
| ssh_passphrase | Optional | Passphrase for ssh encrypted private-key. If the private-key is not encrypted, this parameter is not required. |
# action.yml
name: 'SFTP Deploy'
description: 'Deploy files to your server using sftp & ssh private key'
inputs:
username:
description: 'username'
required: true
server:
description: 'your sftp server'
required: true
port:
description: 'your sftp server port, default to 22'
required: true
default: "22"
ssh_private_key:
description: 'you can copy private_key from your *.pem file, keep format'
required: true
local_path:
description: 'will put all files under this path'
required: true
default: ./*
remote_path:
description: 'files will be copied under remote_path'
required: true
sftp_only:
description: 'connection via sftp protocol only'
required: false
default: false
sftpArgs:
description: 'sftp args'
required: false
delete_remote_files:
description: 'This operation will delete all files in the remote path before upload. Please be careful when setting this to true'
required: false
default: false
password:
description: "SSH passsword,If a password is set, the secret key pair is ignored"
required: false
rsyncArgs:
description: "Additional arguments for the rsync command.
You can use this parameter to customize the rsync behavior, such as excluding files or directories.
Example: '--exclude=node_modules --exclude=.git --exclude=*.log'.
If set, these arguments will be passed directly to the rsync command."
required: false
default: ""
ssh_passphrase:
description: "Passphrase for ssh encrypted private-key. If the private-key is not encrypted, this parameter is not required."
required: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.username }}
- ${{ inputs.server }}
- ${{ inputs.port }}
- ${{ inputs.ssh_private_key }}
- ${{ inputs.local_path }}
- ${{ inputs.remote_path }}
- ${{ inputs.sftp_only }}
- ${{ inputs.sftpArgs }}
- ${{ inputs.delete_remote_files }}
- ${{ inputs.password }}
- ${{ inputs.rsyncArgs }}
- ${{ inputs.ssh_passphrase }}
branding:
icon: 'upload-cloud'
color: 'purple'
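A hedged usage sketch; the version tag, username, paths, and secret names are placeholders, and only documented inputs are used.
- name: Deploy over SFTP
  uses: wlixcc/SFTP-Deploy-Action@v1               # version tag is an assumption
  with:
    username: deploy                               # placeholder
    server: ${{ secrets.SFTP_SERVER }}             # placeholder secret name
    port: '22'
    ssh_private_key: ${{ secrets.SSH_PRIVATE_KEY }}
    local_path: ./dist/*
    remote_path: /var/www/app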
Action ID: marketplace/irgolic/AutoPR
Author: Unknown
Publisher: irgolic
Repository: github.com/irgolic/AutoPR
Fix issues with AI-generated pull requests, powered by ChatGPT GPT-4 (AutoPR)
| Name | Required | Description |
|---|---|---|
| github_token | Required | GitHub token |
| base_branch | Optional | Base branch Default: main |
| loading_gif_url | Optional | URL of the gif to display while the PR is being generated Default: https://media0.giphy.com/media/l3nWhI38IWDofyDrW/giphy.gif |
| model | Optional | Name of the OpenAI chat model Default: gpt-4 |
| context_limit | Optional | Maximum size of the context window to use, varies depending on the model and preference Default: 8192 |
| min_tokens | Optional | Minimum number of tokens to be made available for generation Default: 1000 |
| max_tokens | Optional | Maximum number of tokens to generate Default: 2000 |
| num_reasks | Optional | Number of times to re-ask the model in file exploration and commit generation Default: 2 |
| agent_id | Optional | ID of the brain to use Default: plan_and_code |
| agent_config | Optional | Configuration for the coordinating agent in yaml format |
| target_branch_name_template | Optional | Template for the name of the target branch Default: autopr/{issue_number} |
| temperature | Optional | Temperature for the model Default: 0.9 |
| rail_temperture | Optional | Temperature for the guardrails calls Default: 0.9 |
| overwrite_existing | Optional | Whether to overwrite existing branches and pull requests when creating from issues Default: false |
name: 'Automatic Pull Request'
description: 'Fix issues with AI-generated pull requests, powered by ChatGPT GPT-4 (AutoPR)'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'git-pull-request'
color: 'purple'
inputs:
github_token:
description: 'GitHub token'
required: true
base_branch:
description: 'Base branch'
default: 'main'
loading_gif_url:
description: 'URL of the gif to display while the PR is being generated'
default: 'https://media0.giphy.com/media/l3nWhI38IWDofyDrW/giphy.gif'
model:
description: 'Name of the OpenAI chat model'
default: 'gpt-4'
context_limit:
description: 'Maximum size of the context window to use, varies depending on the model and preference'
default: '8192'
min_tokens:
description: 'Minimum number of tokens to be made available for generation'
default: '1000'
max_tokens:
description: 'Maximum number of tokens to generate'
default: '2000'
num_reasks:
description: 'Number of times to re-ask the model in file exploration and commit generation'
default: '2'
agent_id:
description: 'ID of the brain to use'
default: 'plan_and_code'
agent_config:
description: 'Configuration for the coordinating agent in yaml format'
default: ''
target_branch_name_template:
description: 'Template for the name of the target branch'
default: 'autopr/{issue_number}'
temperature:
description: 'Temperature for the model'
default: '0.9'
rail_temperture:
description: 'Temperature for the guardrails calls'
default: '0.9'
overwrite_existing:
description: 'Whether to overwrite existing branches and pull requests when creating from issues'
default: 'false'
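A minimal usage sketch restricted to documented inputs; the version tag is an assumption and the other values echo the defaults above.
- name: Run AutoPR
  uses: irgolic/AutoPR@v1                          # version tag is an assumption
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    base_branch: main
    target_branch_name_template: 'autopr/{issue_number}'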
Action ID: marketplace/posener/goaction-issues-example
Author: Unknown
Publisher: posener
Repository: github.com/posener/goaction-issues-example
An example of using goaction with Github APIs.
| Name | Required | Description |
|---|---|---|
| GITHUB_TOKEN | Required | A token for Github APIs. |
# File generated by github.com/posener/goaction. DO NOT EDIT.
name: goaction-issues-example
description: "An example of using goaction with Github APIs."
inputs:
GITHUB_TOKEN:
description: "A token for Github APIs."
required: true
runs:
using: docker
image: Dockerfile
env:
GITHUB_TOKEN: "${{ inputs.GITHUB_TOKEN }}"
Action ID: marketplace/CodelyTV/pr-size-labeler
Author: Unknown
Publisher: CodelyTV
Repository: github.com/CodelyTV/pr-size-labeler
Label a PR based on the amount of changes
| Name | Required | Description |
|---|---|---|
| GITHUB_TOKEN | Optional | GitHub token needed to interact with the repository Default: ${{ github.token }} |
| xs_label | Optional | Label for xs PR Default: size/xs |
| xs_max_size | Optional | Max size for a PR to be considered xs Default: 10 |
| s_label | Optional | Label for s PR Default: size/s |
| s_max_size | Optional | Max size for a PR to be considered s Default: 100 |
| m_label | Optional | Label for m PR Default: size/m |
| m_max_size | Optional | Max size for a PR to be considered m Default: 500 |
| l_label | Optional | Label for l PR Default: size/l |
| l_max_size | Optional | Max size for a PR to be considered l Default: 1000 |
| xl_label | Optional | Label for xl PR Default: size/xl |
| fail_if_xl | Optional | Report GitHub Workflow failure if the PR size is xl allowing to forbid PR merge Default: false |
| message_if_xl | Optional | Message to show if the PR size is xl Default: This PR exceeds the recommended size of 1000 lines. Please make sure you are NOT addressing multiple issues with one PR. Note this PR might be rejected due to its size. |
| github_api_url | Optional | URL to the API of your Github Server, only necessary for Github Enterprise customers Default: ${{ github.api_url }} |
| files_to_ignore | Optional | Whitespace separated list of files to ignore when calculating the PR size (sum of changes) |
| ignore_line_deletions | Optional | Whether to ignore lines which are deleted when calculating the PR size. If set to "true", deleted lines will be ignored. Default: false |
| ignore_file_deletions | Optional | Whether to ignore files which are deleted when calculating the PR size. If set to "true", deleted files will be ignored. Default: false |
name: 'Pull Request size labeler'
description: 'Label a PR based on the amount of changes'
inputs:
GITHUB_TOKEN:
description: 'GitHub token needed to interact with the repository'
required: false
default: ${{ github.token }}
xs_label:
description: 'Label for xs PR'
required: false
default: 'size/xs'
xs_max_size:
description: 'Max size for a PR to be considered xs'
required: false
default: '10'
s_label:
description: 'Label for s PR'
required: false
default: 'size/s'
s_max_size:
description: 'Max size for a PR to be considered s'
required: false
default: '100'
m_label:
description: 'Label for m PR'
required: false
default: 'size/m'
m_max_size:
description: 'Max size for a PR to be considered m'
required: false
default: '500'
l_label:
description: 'Label for l PR'
required: false
default: 'size/l'
l_max_size:
description: 'Max size for a PR to be considered l'
required: false
default: '1000'
xl_label:
description: 'Label for xl PR'
required: false
default: 'size/xl'
fail_if_xl:
description: 'Report GitHub Workflow failure if the PR size is xl allowing to forbid PR merge'
required: false
default: 'false'
message_if_xl:
description: 'Message to show if the PR size is xl'
required: false
default: >
This PR exceeds the recommended size of 1000 lines.
Please make sure you are NOT addressing multiple issues with one PR.
Note this PR might be rejected due to its size.
github_api_url:
description: 'URL to the API of your Github Server, only necessary for Github Enterprise customers'
required: false
default: '${{ github.api_url }}'
files_to_ignore:
description: 'Whitespace separated list of files to ignore when calculating the PR size (sum of changes)'
required: false
default: ''
ignore_line_deletions:
description: 'Whether to ignore lines which are deleted when calculating the PR size. If set to "true", deleted lines will be ignored.'
required: false
default: 'false'
ignore_file_deletions:
description: 'Whether to ignore files which are deleted when calculating the PR size. If set to "true", deleted files will be ignored.'
required: false
default: 'false'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- --github_token=${{ inputs.GITHUB_TOKEN }}
- --github_api_url=${{ inputs.github_api_url }}
- --xs_label=${{ inputs.xs_label }}
- --xs_max_size=${{ inputs.xs_max_size }}
- --s_label=${{ inputs.s_label }}
- --s_max_size=${{ inputs.s_max_size }}
- --m_label=${{ inputs.m_label }}
- --m_max_size=${{ inputs.m_max_size }}
- --l_label=${{ inputs.l_label }}
- --l_max_size=${{ inputs.l_max_size }}
- --xl_label=${{ inputs.xl_label }}
- --fail_if_xl=${{ inputs.fail_if_xl }}
- --message_if_xl="${{ inputs.message_if_xl }}"
- --files_to_ignore=${{ inputs.files_to_ignore }}
- --ignore_line_deletions=${{ inputs.ignore_line_deletions }}
- --ignore_file_deletions=${{ inputs.ignore_file_deletions }}
branding:
icon: 'tag'
color: 'green'
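A hedged usage sketch; the version tag and trigger are assumptions, and the size thresholds repeat the documented defaults.
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  size-label:
    runs-on: ubuntu-latest
    steps:
      - uses: CodelyTV/pr-size-labeler@v1          # version tag is an assumption
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          xs_max_size: '10'
          l_max_size: '1000'
          fail_if_xl: 'false'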
Action ID: marketplace/azure/ARO-HCP
Author: Unknown
Publisher: azure
Repository: github.com/azure/ARO-HCP
installs the azure-cli at a given version
| Name | Required | Description |
|---|---|---|
| version | Optional | Azure CLI Version to install Default: latest |
# Custom install of the azure-cli
# Note that the azure-cli ships out of the box with ubuntu-latest for
# github actions runs. This allows us to pin a version and also allows us
# to use commands that aren't present in the azure/cli github action that's
# published to the GitHub actions marketplace.
# Follows instructions: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux
name: 'Install Azure CLI'
description: 'installs the azure-cli at a given version'
inputs:
version:
description: 'Azure CLI Version to install'
required: false
default: "latest"
runs:
using: "composite"
steps:
- name: 'install azure-cli'
shell: bash
env:
AZ_VER: ${{ inputs.version }}
run: |
if [[ "${AZ_VER}" == "latest" ]]; then
# If AZ_VER == latest, then don't bother installing a specific version of az
exit 0
fi
sudo mkdir -p /etc/apt/keyrings
curl -sLS https://packages.microsoft.com/keys/microsoft.asc |
gpg --dearmor | sudo tee /etc/apt/keyrings/microsoft.gpg > /dev/null
sudo chmod go+r /etc/apt/keyrings/microsoft.gpg
AZ_DIST=$(lsb_release -cs)
echo "Types: deb
URIs: https://packages.microsoft.com/repos/azure-cli/
Suites: ${AZ_DIST}
Components: main
Architectures: $(dpkg --print-architecture)
Signed-by: /etc/apt/keyrings/microsoft.gpg" | sudo tee /etc/apt/sources.list.d/azure-cli.sources
sudo apt-get update
# Obtain the currently installed distribution
AZ_DIST=$(lsb_release -cs)
# Install a specific version
sudo apt-get install azure-cli=${AZ_VER}-1~${AZ_DIST} --allow-downgrades
Action ID: marketplace/github/ai-assessment-comment-labeler
Author: Unknown
Publisher: github
Repository: github.com/github/ai-assessment-comment-labeler
Generate an AI comment based on a prompt file and labels.
| Name | Required | Description |
|---|---|---|
| token | Required | The token to use |
| ai_review_label | Required | The label applied to the issue to trigger AI review |
| issue_number | Required | The issue number to comment on |
| issue_body | Required | The body of the issue to comment on |
| prompts_directory | Required | The path to the prompts directory where the .prompt.yml files are located |
| labels_to_prompts_mapping | Required | A mapping of labels to prompt files, separated by '\|'. Format: 'label1,prompt1.prompt.yml\|label2,prompt2.prompt.yml' |
| model | Optional | The model to use for AI generation. Will be inferred from the .prompt.yml file if not provided. Action will fail if not found in the file and not provided. |
| endpoint | Optional | The endpoint to use. Defaults to the OpenAI API endpoint "https://models.github.ai/inference" Default: https://models.github.ai/inference |
| max_tokens | Optional | The maximum number of tokens to generate. Will be inferred from the .prompt.yml file if not provided. Defaults to 200 if not found in the file. |
| repo_name | Optional | The name of the repository. Will be inferred from the GitHub context if not provided. |
| owner | Optional | The owner of the repository. Will be inferred from the GitHub context if not provided. |
| assessment_regex_pattern | Optional | Regex pattern for capturing the assessment line in the AI response used for creating the label to add to the issue. Default: "^###.*[aA]ssessment:\s*(.+)$" Default: ^###.*[aA]ssessment:\s*(.+)$ |
| assessment_regex_flags | Optional | Regex flags for the assessment regex pattern. e.g.: "i" for case-insensitive matching. |
| no_comment_regex_pattern | Optional | Regex pattern for capturing the no comment directive in the AI response. e.g.: "<!--.*no.*comment.*-->" |
| no_comment_regex_flags | Optional | Regex flags for the no comment regex pattern. e.g.: "i" for case-insensitive matching. |
| suppress_labels | Optional | Do not add any labels to the issue. Useful if you just want to get the AI assessments and labels as outputs. Default: false |
| suppress_comments | Optional | Do not add any comments to the issue. Useful if you just want to get the AI assessments and labels as outputs. Default: false |
| Name | Description |
|---|---|
| ai_assessments | JSON array of objects representing all assessments made by the AI e.g. "[{"prompt": "security.prompt.yml", "assessmentLabel": "ai:security:high risk", "response": "### Assessment: High Risk\nThe code contains..."}]" |
name: 'AI Assessment Comment Labeler'
description: Generate an AI comment based on a prompt file and labels.
branding:
icon: 'message-square'
color: orange
inputs:
token:
description: The token to use
required: true
ai_review_label:
description: The label applied to the issue to trigger AI review
required: true
issue_number:
description: The issue number to comment on
required: true
issue_body:
description: The body of the issue to comment on
required: true
prompts_directory:
description: The path to the prompts directory where the .prompt.yml files are located
required: true
labels_to_prompts_mapping:
description: "A mapping of labels to prompt files, separated by '|'. Format: 'label1,prompt1.prompt.yml|label2,prompt2.prompt.yml'"
required: true
model:
description: The model to use for AI generation. Will be inferred from the .prompt.yml file if not provided. Action will fail if not found in the file and not provided.
required: false
endpoint:
description: 'The endpoint to use. Defaults to the OpenAI API endpoint "https://models.github.ai/inference"'
required: false
default: 'https://models.github.ai/inference'
max_tokens:
description: 'The maximum number of tokens to generate. Will be inferred from the .prompt.yml file if not provided. Defaults to 200 if not found in the file.'
required: false
repo_name:
description: The name of the repository. Will be inferred from the GitHub context if not provided.
required: false
owner:
description: The owner of the repository. Will be inferred from the GitHub context if not provided.
required: false
assessment_regex_pattern:
description: 'Regex pattern for capturing the assessment line in the AI response used for creating the label to add to the issue. Default: "^###.*[aA]ssessment:\s*(.+)$"'
required: false
default: '^###.*[aA]ssessment:\s*(.+)$'
assessment_regex_flags:
description: 'Regex flags for the assessment regex pattern. e.g.: "i" for case-insensitive matching.'
required: false
default: ''
no_comment_regex_pattern:
description: 'Regex pattern for capturing the no comment directive in the AI response. e.g.: "<!--.*no.*comment.*-->"'
required: false
default: ''
no_comment_regex_flags:
description: 'Regex flags for the no comment regex pattern. e.g.: "i" for case-insensitive matching.'
required: false
default: ''
suppress_labels:
description: 'Do not add any labels to the issue. Useful if you just want to get the AI assessments and labels as outputs.'
required: false
default: 'false'
suppress_comments:
description: 'Do not add any comments to the issue. Useful if you just want to get the AI assessments and labels as outputs.'
required: false
default: 'false'
outputs:
ai_assessments:
description: 'JSON array of objects representing all assessments made by the AI e.g. "[{"prompt": "security.prompt.yml", "assessmentLabel": "ai:security:high risk", "response": "### Assessment: High Risk\nThe code contains..."}]"'
runs:
using: node20
main: dist/index.js
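A hedged usage sketch; the version tag, label name, prompts directory, and prompt file names are placeholders, and the issue context expressions assume an issues-triggered workflow.
- name: AI assessment comment
  uses: github/ai-assessment-comment-labeler@v1    # version tag is an assumption
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    ai_review_label: ai-review                     # placeholder label
    issue_number: ${{ github.event.issue.number }}
    issue_body: ${{ github.event.issue.body }}
    prompts_directory: .github/prompts             # placeholder path
    labels_to_prompts_mapping: 'security,security.prompt.yml|performance,performance.prompt.yml'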
Action ID: marketplace/actions/setup-dotnet
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-dotnet
Used to build and publish .NET source. Set up a specific version of the .NET and authentication to private NuGet repository
| Name | Required | Description |
|---|---|---|
| dotnet-version | Optional | Optional SDK version(s) to use. If not provided, will install global.json version when available. Examples: 2.2.104, 3.1, 3.1.x, 3.x, 6.0.2xx |
| dotnet-quality | Optional | Optional quality of the build. The possible values are: daily, signed, validated, preview, ga. |
| global-json-file | Optional | Optional global.json location, if your global.json isn't located in the root of the repo. |
| source-url | Optional | Optional package source for which to set up authentication. Will consult any existing NuGet.config in the root of the repo and provide a temporary NuGet.config using the NUGET_AUTH_TOKEN environment variable as a ClearTextPassword |
| owner | Optional | Optional OWNER for using packages from GitHub Package Registry organizations/users other than the current repository's owner. Only used if a GPR URL is also provided in source-url |
| config-file | Optional | Optional NuGet.config location, if your NuGet.config isn't located in the root of the repo. |
| cache | Optional | Optional input to enable caching of the NuGet global-packages folder |
| cache-dependency-path | Optional | Used to specify the path to a dependency file: packages.lock.json. Supports wildcards or a list of file names for caching multiple dependencies. |
| Name | Description |
|---|---|
| cache-hit | A boolean value to indicate if a cache was hit. |
| dotnet-version | Contains the .NET SDK version installed by the action, for reuse. |
name: 'Setup .NET Core SDK'
description: 'Used to build and publish .NET source. Set up a specific version of the .NET and authentication to private NuGet repository'
author: 'GitHub'
branding:
icon: play
color: green
inputs:
dotnet-version:
description: 'Optional SDK version(s) to use. If not provided, will install global.json version when available. Examples: 2.2.104, 3.1, 3.1.x, 3.x, 6.0.2xx'
dotnet-quality:
description: 'Optional quality of the build. The possible values are: daily, signed, validated, preview, ga.'
global-json-file:
description: 'Optional global.json location, if your global.json isn''t located in the root of the repo.'
source-url:
description: 'Optional package source for which to set up authentication. Will consult any existing NuGet.config in the root of the repo and provide a temporary NuGet.config using the NUGET_AUTH_TOKEN environment variable as a ClearTextPassword'
owner:
description: 'Optional OWNER for using packages from GitHub Package Registry organizations/users other than the current repository''s owner. Only used if a GPR URL is also provided in source-url'
config-file:
description: 'Optional NuGet.config location, if your NuGet.config isn''t located in the root of the repo.'
cache:
description: 'Optional input to enable caching of the NuGet global-packages folder'
required: false
default: false
cache-dependency-path:
description: 'Used to specify the path to a dependency file: packages.lock.json. Supports wildcards or a list of file names for caching multiple dependencies.'
required: false
outputs:
cache-hit:
description: 'A boolean value to indicate if a cache was hit.'
dotnet-version:
description: 'Contains the .NET SDK version installed by the action, for reuse.'
runs:
using: 'node24'
main: 'dist/setup/index.js'
post: 'dist/cache-save/index.js'
post-if: success()
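A minimal usage sketch; the version tag, SDK version, and lock-file path are placeholders chosen to illustrate the documented version and cache inputs.
- uses: actions/setup-dotnet@v4                    # version tag is an assumption
  with:
    dotnet-version: '8.0.x'                        # placeholder SDK version
    cache: true
    cache-dependency-path: '**/packages.lock.json'
- run: dotnet build --configuration Release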
Action ID: marketplace/amirisback/android-edit-text-input-filter
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-edit-text-input-filter
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/rhysd/download-artifact
Author: GitHub
Publisher: rhysd
Repository: github.com/rhysd/download-artifact
Download a build artifact that was previously uploaded in the workflow by the upload-artifact action
| Name | Required | Description |
|---|---|---|
| name | Optional | Name of the artifact to download. If unspecified, all artifacts for the run are downloaded. |
| path | Optional | Destination path. Supports basic tilde expansion. Defaults to $GITHUB_WORKSPACE |
| pattern | Optional | A glob pattern matching the artifacts that should be downloaded. Ignored if name is specified. |
| merge-multiple | Optional | When multiple artifacts are matched, this changes the behavior of the destination directories. If true, the downloaded artifacts will be in the same directory specified by path. If false, the downloaded artifacts will be extracted into individual named directories within the specified path. Default: false |
| github-token | Optional | The GitHub token used to authenticate with the GitHub API. This is required when downloading artifacts from a different repository or from a different workflow run. If this is not specified, the action will attempt to download artifacts from the current repository and the current workflow run. |
| repository | Optional | The repository owner and the repository name joined together by "/". If github-token is specified, this is the repository that artifacts will be downloaded from. Default: ${{ github.repository }} |
| run-id | Optional | The id of the workflow run where the desired download artifact was uploaded from. If github-token is specified, this is the run that artifacts will be downloaded from. Default: ${{ github.run_id }} |
| Name | Description |
|---|---|
| download-path | Path of artifact download |
name: 'Download a Build Artifact'
description: 'Download a build artifact that was previously uploaded in the workflow by the upload-artifact action'
author: 'GitHub'
inputs:
name:
description: 'Name of the artifact to download. If unspecified, all artifacts for the run are downloaded.'
required: false
path:
description: 'Destination path. Supports basic tilde expansion. Defaults to $GITHUB_WORKSPACE'
required: false
pattern:
description: 'A glob pattern matching the artifacts that should be downloaded. Ignored if name is specified.'
required: false
merge-multiple:
description: 'When multiple artifacts are matched, this changes the behavior of the destination directories.
If true, the downloaded artifacts will be in the same directory specified by path.
If false, the downloaded artifacts will be extracted into individual named directories within the specified path.'
required: false
default: 'false'
github-token:
description: 'The GitHub token used to authenticate with the GitHub API.
This is required when downloading artifacts from a different repository or from a different workflow run.
If this is not specified, the action will attempt to download artifacts from the current repository and the current workflow run.'
required: false
repository:
description: 'The repository owner and the repository name joined together by "/".
If github-token is specified, this is the repository that artifacts will be downloaded from.'
required: false
default: ${{ github.repository }}
run-id:
description: 'The id of the workflow run where the desired download artifact was uploaded from.
If github-token is specified, this is the run that artifacts will be downloaded from.'
required: false
default: ${{ github.run_id }}
outputs:
download-path:
description: 'Path of artifact download'
runs:
using: 'node20'
main: 'dist/index.js'
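A minimal usage sketch; the version tag, artifact name, and destination path are placeholders, using only the documented inputs and output.
- uses: rhysd/download-artifact@v4                 # version tag is an assumption
  id: download
  with:
    name: build-output                             # placeholder artifact name
    path: dist
- run: ls -R ${{ steps.download.outputs.download-path }}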
Action ID: marketplace/aws-actions/configure-aws-credentials
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/configure-aws-credentials
Configures AWS credentials for use in subsequent steps in a GitHub Action workflow
| Name | Required | Description |
|---|---|---|
aws-region |
Required | AWS Region, e.g. us-east-2 |
role-to-assume |
Optional | The Amazon Resource Name (ARN) of the role to assume. Use the provided credentials to assume an IAM role and configure the Actions environment with the assumed role credentials rather than with the provided credentials. |
aws-access-key-id |
Optional | AWS Access Key ID. Provide this key if you want to assume a role using access keys rather than a web identity token. |
aws-secret-access-key |
Optional | AWS Secret Access Key. Required if aws-access-key-id is provided. |
aws-session-token |
Optional | AWS Session Token. |
web-identity-token-file |
Optional | Use the web identity token file from the provided file system path in order to assume an IAM role using a web identity, e.g. from within an Amazon EKS worker node. |
role-chaining |
Optional | Use existing credentials from the environment to assume a new role, rather than providing credentials as input. |
audience |
Optional | The audience to use for the OIDC provider Default: sts.amazonaws.com |
http-proxy |
Optional | Proxy to use for the AWS SDK agent |
no-proxy |
Optional | Hosts to skip for the proxy configuration |
mask-aws-account-id |
Optional | Whether to mask the AWS account ID for these credentials as a secret value. By default the account ID will not be masked |
role-duration-seconds |
Optional | Role duration in seconds. Default is one hour. |
role-external-id |
Optional | The external ID of the role to assume. |
role-session-name |
Optional | Role session name (default: GitHubActions) |
role-skip-session-tagging |
Optional | Skip session tagging during role assumption |
inline-session-policy |
Optional | Define an inline session policy to use when assuming a role |
managed-session-policies |
Optional | Define a list of managed session policies to use when assuming a role |
output-credentials |
Optional | Whether to set credentials as step output |
output-env-credentials |
Optional | Whether to export credentials as environment variables. If you set this to false, you probably want to use output-credentials. Default: True |
unset-current-credentials |
Optional | Whether to unset the existing credentials in your runner. May be useful if you run this action multiple times in the same job |
disable-retry |
Optional | Whether to disable the retry and backoff mechanism when the assume role call fails. By default the retry mechanism is enabled |
retry-max-attempts |
Optional | The maximum number of times the action will retry the assume role call. By default it will retry 12 times |
special-characters-workaround |
Optional | Some environments do not support special characters in AWS_SECRET_ACCESS_KEY. This option will retry fetching credentials until the secret access key does not contain special characters. This option overrides disable-retry and retry-max-attempts. This option is disabled by default |
use-existing-credentials |
Optional | When enabled, this option will check if there are already valid credentials in the environment. If there are, new credentials will not be fetched. If there are not, the action will run as normal. |
allowed-account-ids |
Optional | An optional comma-delimited list of expected AWS account IDs. The action will fail if it receives credentials for the wrong account. |
force-skip-oidc |
Optional | When enabled, this option will skip using GitHub OIDC provider even if the id-token permission is set. This is sometimes useful when using IAM instance credentials. |
action-timeout-s |
Optional | A global timeout in seconds for the action. When the timeout is reached, the action immediately exits. The default is to run without a timeout. |
| Name | Description |
|---|---|
aws-account-id |
The AWS account ID for the provided credentials |
aws-access-key-id |
The AWS access key ID for the provided credentials |
aws-secret-access-key |
The AWS secret access key for the provided credentials |
aws-session-token |
The AWS session token for the provided credentials |
aws-expiration |
The expiration time for the provided credentials |
name: '"Configure AWS Credentials" Action for GitHub Actions'
description: Configures AWS credentials for use in subsequent steps in a GitHub Action workflow
runs:
using: node20
main: dist/index.js
post: dist/cleanup/index.js
branding:
color: orange
icon: cloud
inputs:
aws-region:
description: AWS Region, e.g. us-east-2
required: true
role-to-assume:
description: The Amazon Resource Name (ARN) of the role to assume. Use the provided credentials to assume an IAM role and configure the Actions environment with the assumed role credentials rather than with the provided credentials.
required: false
aws-access-key-id:
description: AWS Access Key ID. Provide this key if you want to assume a role using access keys rather than a web identity token.
required: false
aws-secret-access-key:
description: AWS Secret Access Key. Required if aws-access-key-id is provided.
required: false
aws-session-token:
description: AWS Session Token.
required: false
web-identity-token-file:
description: Use the web identity token file from the provided file system path in order to assume an IAM role using a web identity, e.g. from within an Amazon EKS worker node.
required: false
role-chaining:
description: Use existing credentials from the environment to assume a new role, rather than providing credentials as input.
required: false
audience:
description: The audience to use for the OIDC provider
required: false
default: sts.amazonaws.com
http-proxy:
description: Proxy to use for the AWS SDK agent
required: false
no-proxy:
description: Hosts to skip for the proxy configuration
required: false
mask-aws-account-id:
description: Whether to mask the AWS account ID for these credentials as a secret value. By default the account ID will not be masked
required: false
role-duration-seconds:
description: Role duration in seconds. Default is one hour.
required: false
role-external-id:
description: The external ID of the role to assume.
required: false
role-session-name:
description: "Role session name (default: GitHubActions)"
required: false
role-skip-session-tagging:
description: Skip session tagging during role assumption
required: false
inline-session-policy:
description: Define an inline session policy to use when assuming a role
required: false
managed-session-policies:
description: Define a list of managed session policies to use when assuming a role
required: false
output-credentials:
description: Whether to set credentials as step output
required: false
output-env-credentials:
description: Whether to export credentials as environment variables. If you set this to false, you probably want to use output-credentials.
required: false
default: true
unset-current-credentials:
description: Whether to unset the existing credentials in your runner. May be useful if you run this action multiple times in the same job
required: false
disable-retry:
description: Whether to disable the retry and backoff mechanism when the assume role call fails. By default the retry mechanism is enabled
required: false
retry-max-attempts:
description: The maximum number of times the action will retry the assume role call. By default it will retry 12 times
required: false
special-characters-workaround:
description: Some environments do not support special characters in AWS_SECRET_ACCESS_KEY. This option will retry fetching credentials until the secret access key does not contain special characters. This option overrides disable-retry and retry-max-attempts. This option is disabled by default
required: false
use-existing-credentials:
required: false
description: When enabled, this option will check if there are already valid credentials in the environment. If there are, new credentials will not be fetched. If there are not, the action will run as normal.
allowed-account-ids:
required: false
description: An optional comma-delimited list of expected AWS account IDs. The action will fail if it receives credentials for the wrong account.
force-skip-oidc:
required: false
description: When enabled, this option will skip using GitHub OIDC provider even if the id-token permission is set. This is sometimes useful when using IAM instance credentials.
action-timeout-s:
required: false
description: A global timeout in seconds for the action. When the timeout is reached, the action immediately exits. The default is to run without a timeout.
outputs:
aws-account-id:
description: The AWS account ID for the provided credentials
aws-access-key-id:
description: The AWS access key ID for the provided credentials
aws-secret-access-key:
description: The AWS secret access key for the provided credentials
aws-session-token:
description: The AWS session token for the provided credentials
aws-expiration:
description: The expiration time for the provided credentials
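A minimal usage sketch of this action via GitHub OIDC (not taken from the upstream README); the role ARN and region are placeholder values, and `@v4` is assumed to be the current major release tag.

```yaml
# Hypothetical workflow: assume an IAM role via GitHub OIDC, then verify the identity.
name: aws-oidc-example
on: [push]
permissions:
  id-token: write   # required so the runner can request an OIDC token
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-2
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role  # placeholder ARN
          role-session-name: github-actions-example
      - run: aws sts get-caller-identity  # confirms the assumed-role credentials are active
```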
Action ID: marketplace/axel-op/package-java-aws-lambda
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/package-java-aws-lambda
Package an AWS Lambda in Java
| Name | Required | Description |
|---|---|---|
working-directory |
Optional | The root directory of the function Default: . |
| Name | Description |
|---|---|
deployment-directory |
The directory containing the files to be deployed |
deployment-file |
The jar file to be deployed |
name: "Package an AWS Lambda in Java"
description: "Package an AWS Lambda in Java"
branding:
icon: package
color: red
inputs:
working-directory:
description: "The root directory of the function"
required: false
default: "."
outputs:
deployment-directory:
description: "The directory containing the files to be deployed"
value: ${{ steps.package.outputs.deployment-directory }}
deployment-file:
description: "The jar file to be deployed"
value: ${{ steps.package.outputs.deployment-file }}
runs:
using: "composite"
steps:
- name: Package
id: package
working-directory: ${{ inputs.working-directory }}
run: "${GITHUB_ACTION_PATH}/package.sh"
shell: bash
env:
DEPLOYMENT_DIR: deployment
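A minimal usage sketch, assuming a Maven or Gradle project at the repository root; the `@main` ref and the follow-up echo step are illustrative only.

```yaml
# Hypothetical workflow: package a Java Lambda and expose the resulting jar path.
name: package-lambda-example
on: [push]
jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - id: package
        uses: axel-op/package-java-aws-lambda@main  # pin to a released tag in practice
        with:
          working-directory: .
      - run: echo "Deployable jar: ${{ steps.package.outputs.deployment-file }}"
```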
Action ID: marketplace/mheap/github-action-issue-management
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-issue-management
Automatically manage issue comments
| Name | Required | Description |
|---|---|---|
necromancer_delay |
Optional | The amount of time that an issue must be closed before new comments are considered necromancy Default: P7D |
disable_auto_assign |
Optional | Set to 'on' to disable automatic issue assignment when a comment is added Default: off |
name: Issue Management
description: Automatically manage issue comments
runs:
using: docker
image: Dockerfile
branding:
icon: tag
color: blue
inputs:
necromancer_delay:
default: "P7D"
description: "The amount of time that an issue must be closed before new comments are considered necromancy"
required: false
disable_auto_assign:
default: "off"
description: "Set to 'on' to disable automatic issue assignment when a comment is added"
required: false
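A minimal sketch wiring the action to issue-comment events; the `@main` ref is a placeholder, the delay value simply overrides the `P7D` default, and passing the default token via the environment is an assumption.

```yaml
# Hypothetical workflow: run issue management whenever an issue comment is created.
name: issue-management-example
on:
  issue_comment:
    types: [created]
jobs:
  manage:
    runs-on: ubuntu-latest
    steps:
      - uses: mheap/github-action-issue-management@main  # pin to a released tag in practice
        with:
          necromancer_delay: P14D       # ISO 8601 duration; two weeks instead of the default P7D
          disable_auto_assign: 'off'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # assumption: the action reads the token from the environment
```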
Action ID: marketplace/google-github-actions/run-gemini-cli
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/run-gemini-cli
Invoke the Gemini CLI from a GitHub Action.
| Name | Required | Description |
|---|---|---|
gcp_location |
Optional | The Google Cloud location. |
gcp_project_id |
Optional | The Google Cloud project ID. |
gcp_service_account |
Optional | The Google Cloud service account email. |
gcp_workload_identity_provider |
Optional | The Google Cloud Workload Identity Provider. |
gcp_token_format |
Optional | The token format for authentication. Set to "access_token" to generate access tokens (requires service account), or set to empty string for direct WIF. Can be "access_token" or "id_token". Default: access_token |
gcp_access_token_scopes |
Optional | The access token scopes when using token_format "access_token". Comma-separated list of OAuth 2.0 scopes. Default: https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/userinfo.profile |
gemini_api_key |
Optional | The API key for the Gemini API. |
gemini_cli_version |
Optional | The version of the Gemini CLI to install. Can be "latest", "preview", "nightly", a specific version number, or a git branch, tag, or commit. For more information, see [Gemini CLI releases](https://github.com/google-gemini/gemini-cli/blob/main/docs/releases.md). Default: latest |
gemini_debug |
Optional | Enable debug logging and output streaming. |
gemini_model |
Optional | The model to use with Gemini. |
google_api_key |
Optional | The Vertex AI API key to use with Gemini. |
prompt |
Optional | A string passed to the Gemini CLI's [`--prompt` argument](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#command-line-arguments). Default: You are a helpful assistant. |
settings |
Optional | A JSON string written to `.gemini/settings.json` to configure the CLI's _project_ settings. For more details, see the documentation on [settings files](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#settings-files). |
use_gemini_code_assist |
Optional | Whether to use Code Assist for Gemini model access instead of the default Gemini API key.
For more information, see the [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/authentication.md). Default: false |
use_vertex_ai |
Optional | Whether to use Vertex AI for Gemini model access instead of the default Gemini API key.
For more information, see the [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/authentication.md). Default: false |
extensions |
Optional | A list of Gemini CLI extensions to install. |
upload_artifacts |
Optional | Whether to upload artifacts to the github action. Default: false |
use_pnpm |
Optional | Whether or not to use pnpm instead of npm to install gemini-cli Default: false |
workflow_name |
Optional | The GitHub workflow name, used for telemetry purposes. Default: ${{ github.workflow }} |
| Name | Description |
|---|---|
summary |
The summarized output from the Gemini CLI execution. |
error |
The error output from the Gemini CLI execution, if any. |
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Run Gemini CLI'
author: 'Google LLC'
description: |-
Invoke the Gemini CLI from a GitHub Action.
inputs:
gcp_location:
description: 'The Google Cloud location.'
required: false
gcp_project_id:
description: 'The Google Cloud project ID.'
required: false
gcp_service_account:
description: 'The Google Cloud service account email.'
required: false
gcp_workload_identity_provider:
description: 'The Google Cloud Workload Identity Provider.'
required: false
gcp_token_format:
description: 'The token format for authentication. Set to "access_token" to generate access tokens (requires service account), or set to empty string for direct WIF. Can be "access_token" or "id_token".'
required: false
default: 'access_token'
gcp_access_token_scopes:
description: 'The access token scopes when using token_format "access_token". Comma-separated list of OAuth 2.0 scopes.'
required: false
default: 'https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/userinfo.profile'
gemini_api_key:
description: 'The API key for the Gemini API.'
required: false
gemini_cli_version:
description: 'The version of the Gemini CLI to install. Can be "latest", "preview", "nightly", a specific version number, or a git branch, tag, or commit. For more information, see [Gemini CLI releases](https://github.com/google-gemini/gemini-cli/blob/main/docs/releases.md).'
required: false
default: 'latest'
gemini_debug:
description: 'Enable debug logging and output streaming.'
required: false
gemini_model:
description: 'The model to use with Gemini.'
required: false
google_api_key:
description: 'The Vertex AI API key to use with Gemini.'
required: false
prompt:
description: |-
A string passed to the Gemini CLI's [`--prompt` argument](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#command-line-arguments).
required: false
default: 'You are a helpful assistant.'
settings:
description: |-
A JSON string written to `.gemini/settings.json` to configure the CLI's _project_ settings.
For more details, see the documentation on [settings files](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#settings-files).
required: false
use_gemini_code_assist:
description: |-
Whether to use Code Assist for Gemini model access instead of the default Gemini API key.
For more information, see the [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/authentication.md).
required: false
default: 'false'
use_vertex_ai:
description: |-
Whether to use Vertex AI for Gemini model access instead of the default Gemini API key.
For more information, see the [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/authentication.md).
required: false
default: 'false'
extensions:
description: 'A list of Gemini CLI extensions to install.'
required: false
upload_artifacts:
description: 'Whether to upload artifacts to the github action.'
required: false
default: 'false'
use_pnpm:
description: 'Whether or not to use pnpm instead of npm to install gemini-cli'
required: false
default: 'false'
workflow_name:
description: 'The GitHub workflow name, used for telemetry purposes.'
required: false
default: '${{ github.workflow }}'
outputs:
summary:
description: 'The summarized output from the Gemini CLI execution.'
value: '${{ steps.gemini_run.outputs.gemini_response }}'
error:
description: 'The error output from the Gemini CLI execution, if any.'
value: '${{ steps.gemini_run.outputs.gemini_errors }}'
runs:
using: 'composite'
steps:
- name: 'Validate Inputs'
id: 'validate_inputs'
shell: 'bash'
run: |-
set -exuo pipefail
# Emit a clear warning in three places without failing the step
warn() {
local msg="$1"
echo "WARNING: ${msg}" >&2
echo "::warning title=Input validation::${msg}"
if [[ -n "${GITHUB_STEP_SUMMARY:-}" ]]; then
{
echo "### Input validation warnings"
echo
echo "- ${msg}"
} >> "${GITHUB_STEP_SUMMARY}"
fi
}
# Validate the count of authentication methods
auth_methods=0
if [[ "${INPUT_GEMINI_API_KEY_PRESENT:-false}" == "true" ]]; then ((++auth_methods)); fi
if [[ "${INPUT_GOOGLE_API_KEY_PRESENT:-false}" == "true" ]]; then ((++auth_methods)); fi
if [[ "${INPUT_GCP_WORKLOAD_IDENTITY_PROVIDER_PRESENT:-false}" == "true" ]]; then ((++auth_methods)); fi
if [[ ${auth_methods} -eq 0 ]]; then
warn "No authentication method provided. Please provide one of 'gemini_api_key', 'google_api_key', or 'gcp_workload_identity_provider'."
fi
if [[ ${auth_methods} -gt 1 ]]; then
warn "Multiple authentication methods provided. Please use only one of 'gemini_api_key', 'google_api_key', or 'gcp_workload_identity_provider'."
fi
# Validate Workload Identity Federation inputs
if [[ "${INPUT_GCP_WORKLOAD_IDENTITY_PROVIDER_PRESENT:-false}" == "true" ]]; then
if [[ "${INPUT_GCP_PROJECT_ID_PRESENT:-false}" != "true" ]]; then
warn "When using Workload Identity Federation ('gcp_workload_identity_provider'), you must also provide 'gcp_project_id'."
fi
# Service account is required when using token_format (default behavior)
# Only optional when explicitly set to empty for direct WIF
if [[ "${INPUT_GCP_TOKEN_FORMAT}" != "" && "${INPUT_GCP_SERVICE_ACCOUNT_PRESENT:-false}" != "true" ]]; then
warn "When using Workload Identity Federation with token generation ('gcp_token_format'), you must also provide 'gcp_service_account'. To use direct WIF without a service account, explicitly set 'gcp_token_format' to an empty string."
fi
if [[ "${INPUT_USE_VERTEX_AI:-false}" == "${INPUT_USE_GEMINI_CODE_ASSIST:-false}" ]]; then
warn "When using Workload Identity Federation, you must set exactly one of 'use_vertex_ai' or 'use_gemini_code_assist' to 'true'."
fi
fi
# Validate Vertex AI API Key
if [[ "${INPUT_GOOGLE_API_KEY_PRESENT:-false}" == "true" ]]; then
if [[ "${INPUT_USE_VERTEX_AI:-false}" != "true" ]]; then
warn "When using 'google_api_key', you must set 'use_vertex_ai' to 'true'."
fi
if [[ "${INPUT_USE_GEMINI_CODE_ASSIST:-false}" == "true" ]]; then
warn "When using 'google_api_key', 'use_gemini_code_assist' cannot be 'true'."
fi
fi
# Validate Gemini API Key
if [[ "${INPUT_GEMINI_API_KEY_PRESENT:-false}" == "true" ]]; then
if [[ "${INPUT_USE_VERTEX_AI:-false}" == "true" || "${INPUT_USE_GEMINI_CODE_ASSIST:-false}" == "true" ]]; then
warn "When using 'gemini_api_key', both 'use_vertex_ai' and 'use_gemini_code_assist' must be 'false'."
fi
fi
env:
INPUT_GEMINI_API_KEY_PRESENT: "${{ inputs.gemini_api_key != '' }}"
INPUT_GOOGLE_API_KEY_PRESENT: "${{ inputs.google_api_key != '' }}"
INPUT_GCP_WORKLOAD_IDENTITY_PROVIDER_PRESENT: "${{ inputs.gcp_workload_identity_provider != '' }}"
INPUT_GCP_PROJECT_ID_PRESENT: "${{ inputs.gcp_project_id != '' }}"
INPUT_GCP_SERVICE_ACCOUNT_PRESENT: "${{ inputs.gcp_service_account != '' }}"
INPUT_GCP_TOKEN_FORMAT: '${{ inputs.gcp_token_format }}'
INPUT_USE_VERTEX_AI: '${{ inputs.use_vertex_ai }}'
INPUT_USE_GEMINI_CODE_ASSIST: '${{ inputs.use_gemini_code_assist }}'
- name: 'Sanitize workflow name'
id: 'sanitize_workflow_name'
shell: 'bash'
run: |
SANITIZED=$(echo "${WORKFLOW_NAME}" | sed 's/[^ a-zA-Z0-9-]//g' | xargs | tr ' ' '_' | tr '[:upper:]' '[:lower:]')
echo "gh_workflow_name=$SANITIZED" >> $GITHUB_OUTPUT
env:
WORKFLOW_NAME: '${{ inputs.workflow_name }}'
- name: 'Configure Gemini CLI'
if: |-
${{ inputs.settings != '' }}
run: |-
mkdir -p .gemini/
echo "${SETTINGS}" > ".gemini/settings.json"
shell: 'bash'
env:
SETTINGS: '${{ inputs.settings }}'
- name: 'Install Custom Commands'
shell: 'bash'
run: |-
set -euo pipefail
mkdir -p .gemini/commands
cp -r "${GITHUB_ACTION_PATH}/.github/commands/"* .gemini/commands/
env:
GITHUB_ACTION_PATH: '${{ github.action_path }}'
- name: 'Authenticate to Google Cloud'
if: |-
${{ inputs.gcp_workload_identity_provider != '' }}
id: 'auth'
uses: 'google-github-actions/auth@v2' # ratchet:exclude
with:
project_id: '${{ inputs.gcp_project_id }}'
workload_identity_provider: '${{ inputs.gcp_workload_identity_provider }}'
service_account: '${{ inputs.gcp_service_account }}'
token_format: '${{ inputs.gcp_token_format }}'
access_token_scopes: '${{ inputs.gcp_access_token_scopes }}'
- name: 'Install pnpm'
if: |-
${{ inputs.use_pnpm == 'true' }}
uses: 'pnpm/action-setup@41ff72655975bd51cab0327fa583b6e92b6d3061' # ratchet:pnpm/action-setup@v4
with:
version: 10
- name: 'Install Gemini CLI'
id: 'install'
env:
GEMINI_CLI_VERSION: '${{ inputs.gemini_cli_version }}'
EXTENSIONS: '${{ inputs.extensions }}'
USE_PNPM: '${{ inputs.use_pnpm }}'
shell: 'bash'
run: |-
set -euo pipefail
VERSION_INPUT="${GEMINI_CLI_VERSION:-latest}"
if [[ "${VERSION_INPUT}" == "latest" || "${VERSION_INPUT}" == "preview" || "${VERSION_INPUT}" == "nightly" || "${VERSION_INPUT}" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9\.-]+)?(\+[a-zA-Z0-9\.-]+)?$ ]]; then
echo "Installing Gemini CLI from npm: @google/gemini-cli@${VERSION_INPUT}"
if [[ "${USE_PNPM}" == "true" ]]; then
pnpm add --silent --global @google/gemini-cli@"${VERSION_INPUT}"
else
npm install --silent --no-audit --prefer-offline --global @google/gemini-cli@"${VERSION_INPUT}"
fi
else
echo "Installing Gemini CLI from GitHub: github:google-gemini/gemini-cli#${VERSION_INPUT}"
git clone https://github.com/google-gemini/gemini-cli.git
cd gemini-cli
git checkout "${VERSION_INPUT}"
npm install
npm run bundle
npm install --silent --no-audit --prefer-offline --global .
fi
echo "Verifying installation:"
if command -v gemini >/dev/null 2>&1; then
gemini --version || echo "Gemini CLI installed successfully (version command not available)"
else
echo "Error: Gemini CLI not found in PATH"
exit 1
fi
if [[ -n "${EXTENSIONS}" ]]; then
echo "Installing Gemini CLI extensions:"
echo "${EXTENSIONS}" | jq -r '.[]' | while IFS= read -r extension; do
extension=$(echo "${extension}" | xargs)
if [[ -n "${extension}" ]]; then
echo "Installing ${extension}..."
echo "Y" | gemini extensions install "${extension}"
fi
done
fi
- name: 'Run Gemini CLI'
id: 'gemini_run'
shell: 'bash'
run: |-
set -euo pipefail
# Create a temporary directory for storing the output, and ensure it's
# cleaned up later
TEMP_STDOUT="$(mktemp -p "${RUNNER_TEMP}" gemini-out.XXXXXXXXXX)"
TEMP_STDERR="$(mktemp -p "${RUNNER_TEMP}" gemini-err.XXXXXXXXXX)"
function cleanup {
rm -f "${TEMP_STDOUT}" "${TEMP_STDERR}"
}
trap cleanup EXIT
# Keep track of whether we've failed
FAILED=false
# Run Gemini CLI with the provided prompt, using JSON output format
# We capture stdout (JSON) to TEMP_STDOUT and stderr to TEMP_STDERR
if [[ "${GEMINI_DEBUG}" = true ]]; then
echo "::warning::Gemini CLI debug logging is enabled. This will stream responses, which could reveal sensitive information if processed with untrusted inputs."
echo "::: Start Gemini CLI STDOUT :::"
if ! gemini --debug --yolo --prompt "${PROMPT}" --output-format json 2> >(tee "${TEMP_STDERR}" >&2) | tee "${TEMP_STDOUT}"; then
FAILED=true
fi
# Wait for async stderr logging to complete. This is because process substitution in Bash is async so let tee finish writing to ${TEMP_STDERR}
sleep 1
echo "::: End Gemini CLI STDOUT :::"
else
if ! gemini --yolo --prompt "${PROMPT}" --output-format json 2> "${TEMP_STDERR}" 1> "${TEMP_STDOUT}"; then
FAILED=true
fi
fi
# Create the artifacts directory and copy full logs
mkdir -p gemini-artifacts
cp "${TEMP_STDOUT}" gemini-artifacts/stdout.log
cp "${TEMP_STDERR}" gemini-artifacts/stderr.log
if [[ -f .gemini/telemetry.log ]]; then
cp .gemini/telemetry.log gemini-artifacts/telemetry.log
else
# Create an empty file so the artifact upload doesn't fail if telemetry is missing
touch gemini-artifacts/telemetry.log
fi
# Parse JSON output to extract response and errors
# If output is not valid JSON, RESPONSE will be empty and we'll rely on stderr for errors
RESPONSE=""
ERROR_JSON=""
if jq -e . "${TEMP_STDOUT}" >/dev/null 2>&1; then
RESPONSE=$(jq -r '.response // ""' "${TEMP_STDOUT}")
fi
if jq -e . "${TEMP_STDERR}" >/dev/null 2>&1; then
ERROR_JSON=$(jq -c '.error // empty' "${TEMP_STDERR}")
fi
if ! jq -e . "${TEMP_STDOUT}" >/dev/null 2>&1; then
echo "::warning::Gemini CLI output was not valid JSON"
# If we failed to parse JSON and the command didn't fail, this is likely a silent failure (e.g. resource limit)
if [[ "${FAILED}" == "false" ]]; then
echo "::error title=Gemini CLI execution failed::Gemini CLI produced invalid or empty JSON output, which usually indicates a silent failure."
FAILED=true
fi
fi
# Set the captured response as a step output, supporting multiline
echo "gemini_response<<EOF" >> "${GITHUB_OUTPUT}"
if [[ -n "${RESPONSE}" ]]; then
echo "${RESPONSE}" >> "${GITHUB_OUTPUT}"
else
cat "${TEMP_STDOUT}" >> "${GITHUB_OUTPUT}"
fi
echo "EOF" >> "${GITHUB_OUTPUT}"
# Set the captured errors as a step output, supporting multiline
echo "gemini_errors<<EOF" >> "${GITHUB_OUTPUT}"
if [[ -n "${ERROR_JSON}" ]]; then
echo "${ERROR_JSON}" >> "${GITHUB_OUTPUT}"
else
cat "${TEMP_STDERR}" >> "${GITHUB_OUTPUT}"
fi
echo "EOF" >> "${GITHUB_OUTPUT}"
if [[ "${FAILED}" = true ]]; then
# If we have a structured error from JSON, use it for the error message
if [[ -n "${ERROR_JSON}" ]]; then
ERROR_MSG=$(jq -r '.message // .' <<< "${ERROR_JSON}")
echo "::error title=Gemini CLI execution failed::${ERROR_MSG}"
fi
echo "::: Start Gemini CLI STDERR :::"
cat "${TEMP_STDERR}"
echo "::: End Gemini CLI STDERR :::"
exit 1
fi
env:
GEMINI_DEBUG: '${{ fromJSON(inputs.gemini_debug || false) }}'
GEMINI_API_KEY: '${{ inputs.gemini_api_key }}'
SURFACE: 'GitHub'
GOOGLE_CLOUD_PROJECT: '${{ inputs.gcp_project_id }}'
GOOGLE_CLOUD_LOCATION: '${{ inputs.gcp_location }}'
GOOGLE_GENAI_USE_VERTEXAI: '${{ inputs.use_vertex_ai }}'
GOOGLE_API_KEY: '${{ inputs.google_api_key }}'
GOOGLE_GENAI_USE_GCA: '${{ inputs.use_gemini_code_assist }}'
GOOGLE_CLOUD_ACCESS_TOKEN: '${{steps.auth.outputs.access_token}}'
PROMPT: '${{ inputs.prompt }}'
GEMINI_MODEL: '${{ inputs.gemini_model }}'
GH_WORKFLOW_NAME: '${{ steps.sanitize_workflow_name.outputs.gh_workflow_name }}'
- name: 'Upload Gemini CLI outputs'
if: |-
${{ inputs.upload_artifacts == 'true' }}
uses: 'actions/upload-artifact@v4' # ratchet:exclude
with:
name: 'gemini-output'
path: 'gemini-artifacts/'
- name: 'Upload Telemetry to Google Cloud'
if: |-
${{ inputs.gcp_workload_identity_provider != '' }}
shell: 'bash'
run: |-
set -euo pipefail
# If the telemetry log doesn't exist or is empty, do nothing.
if [[ ! -s ".gemini/telemetry.log" ]]; then
echo "No telemetry log found, skipping upload."
exit 0
fi
# Generate the real config file from the template
sed -e "s#OTLP_GOOGLE_CLOUD_PROJECT#${OTLP_GOOGLE_CLOUD_PROJECT}#g" \
-e "s#GITHUB_REPOSITORY_PLACEHOLDER#${GITHUB_REPOSITORY}#g" \
-e "s#GITHUB_RUN_ID_PLACEHOLDER#${GITHUB_RUN_ID}#g" \
"${GITHUB_ACTION_PATH}/scripts/collector-gcp.yaml.template" > ".gemini/collector-gcp.yaml"
# Ensure credentials file has the right permissions
chmod 444 "$GOOGLE_APPLICATION_CREDENTIALS"
# Run the collector in the background with a known name
docker run --rm --name gemini-telemetry-collector --network host \
-v "${GITHUB_WORKSPACE}:/github/workspace" \
-e "GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS/$GITHUB_WORKSPACE//github/workspace}" \
-w "/github/workspace" \
otel/opentelemetry-collector-contrib:0.108.0 \
--config /github/workspace/.gemini/collector-gcp.yaml &
# Wait for the collector to start up
echo "Waiting for collector to initialize..."
sleep 10
# Monitor the queue until it's empty or we time out
echo "Monitoring exporter queue..."
ATTEMPTS=0
MAX_ATTEMPTS=12 # 12 * 5s = 60s timeout
while true; do
# Use -f to fail silently if the server isn't ready yet
# Filter out the prometheus help/type comments before grabbing the value
QUEUE_SIZE=$(curl -sf http://localhost:8888/metrics | grep otelcol_exporter_queue_size | grep -v '^#' | awk '{print $2}' || echo "-1")
if [ "$QUEUE_SIZE" == "0" ]; then
echo "Exporter queue is empty, all data processed."
break
fi
if [ "$ATTEMPTS" -ge "$MAX_ATTEMPTS" ]; then
echo "::warning::Timed out waiting for exporter queue to empty. Proceeding with shutdown."
break
fi
echo "Queue size: $QUEUE_SIZE, waiting..."
sleep 5
ATTEMPTS=$((ATTEMPTS + 1))
done
# Gracefully shut down the collector
echo "Stopping collector..."
docker stop gemini-telemetry-collector
echo "Collector stopped."
env:
OTLP_GOOGLE_CLOUD_PROJECT: '${{ inputs.gcp_project_id }}'
GITHUB_ACTION_PATH: '${{ github.action_path }}'
GITHUB_REPOSITORY: '${{ github.repository }}'
GITHUB_RUN_ID: '${{ github.run_id }}'
branding:
icon: 'terminal'
color: 'blue'
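A minimal sketch using a Gemini API key, the simplest of the three authentication paths validated by the action; the ref and the secret name are placeholders.

```yaml
# Hypothetical workflow: invoke the Gemini CLI on demand with an API key.
name: gemini-cli-example
on: [workflow_dispatch]
jobs:
  gemini:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: gemini
        uses: google-github-actions/run-gemini-cli@main  # pin to a released tag or SHA in practice
        with:
          gemini_api_key: ${{ secrets.GEMINI_API_KEY }}  # placeholder secret name
          prompt: 'Summarize the repository structure in three bullet points.'
      - env:
          SUMMARY: ${{ steps.gemini.outputs.summary }}
        run: printf '%s\n' "$SUMMARY"  # pass the output through an env var rather than interpolating into the script
```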
Action ID: marketplace/google-github-actions/get-secretmanager-secrets
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/get-secretmanager-secrets
Get secrets from Google Secret Manager and make their results available as output variables.
| Name | Required | Description |
|---|---|---|
secrets |
Required | List of secrets to access and inject into the environment. These are comma-separated or newline-separated `OUTPUTNAME:SECRET`. Output names or secret names that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml secrets: |- output1:my-project/my-secret1 output2:my-project/my-secret2 ``` Secrets can be referenced using the following formats: ```text # Long form projects/<project-id>/secrets/<secret-id>/versions/<version-id> # Long form - "latest" version projects/<project-id>/secrets/<secret-id> # Short form <project-id>/<secret-id>/<version-id> # Short form - "latest" version <project-id>/<secret-id> ``` |
min_mask_length |
Optional | Minimum line length for a secret to be masked. Extremely short secrets
(e.g. `{` or `a`) can make GitHub Actions log output unreadable. This is
especially important for multi-line secrets, since each line of the secret
is masked independently. Default: 4 |
export_to_environment |
Optional | Make the fetched secrets additionally available as environment variables. |
encoding |
Optional | Encoding in which secrets will be exported into outputs (and environment
variables if `export_to_environment` is true). For secrets that cannot be
represented in text, such as encryption key bytes, choose an encoding that
has a safe character such as `base64` or `hex`. For more information about
available encoding types, please see the [Node.js Buffer and character
encodings](https://nodejs.org/docs/latest/api/buffer.html#buffers-and-character-encodings). Default: utf8 |
universe |
Optional | The Google Cloud universe to use for constructing API endpoints. The
default universe is "googleapis.com", which corresponds to
https://cloud.google.com. Trusted Partner Cloud and Google Distributed
Hosted Cloud should set this to their universe address. Default: googleapis.com |
| Name | Description |
|---|---|
secrets |
Each secret is prefixed with an output name. The secret's resolved access value will be available at that output in future build steps. For example: ```yaml jobs: job_id: steps: - id: 'secrets' uses: 'google-github-actions/get-secretmanager-secrets@v3' with: secrets: |- token:my-project/docker-registry-token ``` will be available in future steps as the output: ```text steps.secrets.outputs.token ``` |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Get Secret Manager secrets'
author: 'Google LLC'
description: |-
Get secrets from Google Secret Manager and make their results available as
output variables.
inputs:
secrets:
description: |-
List of secrets to access and inject into the environment. These are
comma-separated or newline-separated `OUTPUTNAME:SECRET`. Output names or
secret names that contain separators must be escaped with a backslash
(e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is
trimmed unless values are quoted.
```yaml
secrets: |-
output1:my-project/my-secret1
output2:my-project/my-secret2
```
Secrets can be referenced using the following formats:
```text
# Long form
projects/<project-id>/secrets/<secret-id>/versions/<version-id>
# Long form - "latest" version
projects/<project-id>/secrets/<secret-id>
# Short form
<project-id>/<secret-id>/<version-id>
# Short form - "latest" version
<project-id>/<secret-id>
```
required: true
min_mask_length:
description: |-
Minimum line length for a secret to be masked. Extremely short secrets
(e.g. `{` or `a`) can make GitHub Actions log output unreadable. This is
especially important for multi-line secrets, since each line of the secret
is masked independently.
required: false
default: '4'
export_to_environment:
description: |-
Make the fetched secrets additionally available as environment variables.
required: false
default: false
encoding:
description: |-
Encoding in which secrets will be exported into outputs (and environment
variables if `export_to_environment` is true). For secrets that cannot be
represented in text, such as encryption key bytes, choose an encoding that
has a safe character such as `base64` or `hex`. For more information about
available encoding types, please see the [Node.js Buffer and character
encodings](https://nodejs.org/docs/latest/api/buffer.html#buffers-and-character-encodings).
required: false
default: 'utf8'
universe:
description: |-
The Google Cloud universe to use for constructing API endpoints. The
default universe is "googleapis.com", which corresponds to
https://cloud.google.com. Trusted Partner Cloud and Google Distributed
Hosted Cloud should set this to their universe address.
required: false
default: 'googleapis.com'
outputs:
secrets:
description: |-
Each secret is prefixed with an output name. The secret's resolved access
value will be available at that output in future build steps. For example:
```yaml
jobs:
job_id:
steps:
- id: 'secrets'
uses: 'google-github-actions/get-secretmanager-secrets@v3'
with:
secrets: |-
token:my-project/docker-registry-token
```
will be available in future steps as the output:
```text
steps.secrets.outputs.token
```
branding:
icon: 'lock'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
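A minimal sketch combining this action with Workload Identity Federation via `google-github-actions/auth@v2`; the provider path, service account, project, and secret names are placeholders.

```yaml
# Hypothetical workflow: authenticate with WIF, fetch a secret, and use it in a later step.
name: secret-manager-example
on: [push]
permissions:
  id-token: write
  contents: read
jobs:
  fetch-secrets:
    runs-on: ubuntu-latest
    steps:
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/example-pool/providers/example-provider  # placeholder
          service_account: example-sa@my-project.iam.gserviceaccount.com  # placeholder
      - id: secrets
        uses: google-github-actions/get-secretmanager-secrets@v3
        with:
          secrets: |-
            token:my-project/docker-registry-token
      - env:
          TOKEN: ${{ steps.secrets.outputs.token }}
        run: echo "Fetched a token of length ${#TOKEN}"  # the value itself is masked in the logs
```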
Action ID: marketplace/axel-op/self-contained-dart-github-action
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/self-contained-dart-github-action
Greet someone and record the time
| Name | Required | Description |
|---|---|---|
who-to-greet |
Optional | Who to greet |
| Name | Description |
|---|---|
time |
The time we greeted you |
name: "Hello World"
description: "Greet someone and record the time"
inputs:
who-to-greet: # id of input
description: "Who to greet"
required: false
outputs:
time: # id of output
description: "The time we greeted you"
runs:
using: "docker"
image: "Dockerfile.action"
args:
- ${{ inputs.who-to-greet }}
Action ID: marketplace/docker/login-action
Author: docker
Publisher: docker
Repository: github.com/docker/login-action
GitHub Action to login against a Docker registry
| Name | Required | Description |
|---|---|---|
registry |
Optional | Server address of Docker registry. If not set then will default to Docker Hub |
username |
Optional | Username used to log against the Docker registry |
password |
Optional | Password or personal access token used to log against the Docker registry |
ecr |
Optional | Specifies whether the given registry is ECR (auto, true or false) |
logout |
Optional | Log out from the Docker registry at the end of a job Default: true |
registry-auth |
Optional | Raw authentication to registries, defined as YAML objects |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Docker Login'
description: 'GitHub Action to login against a Docker registry'
author: 'docker'
branding:
icon: 'anchor'
color: 'blue'
inputs:
registry:
description: 'Server address of Docker registry. If not set then will default to Docker Hub'
required: false
username:
description: 'Username used to log against the Docker registry'
required: false
password:
description: 'Password or personal access token used to log against the Docker registry'
required: false
ecr:
description: 'Specifies whether the given registry is ECR (auto, true or false)'
required: false
logout:
description: 'Log out from the Docker registry at the end of a job'
default: 'true'
required: false
registry-auth:
description: 'Raw authentication to registries, defined as YAML objects'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
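A minimal sketch logging in to Docker Hub; the secret names are placeholders, and the same pattern works for other registries by also setting `registry` (for example `ghcr.io`).

```yaml
# Hypothetical workflow: log in to Docker Hub before building or pushing images.
name: docker-login-example
on: [push]
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}  # placeholder secret names
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # logout defaults to 'true', so the post step removes the stored credentials automatically
```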
Action ID: marketplace/mheap/reviewed-by-trailer-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/reviewed-by-trailer-action
Automatically add a Reviewed-by trailer to Pull Requests
| Name | Required | Description |
|---|---|---|
states |
Optional | Comma delimited list of states to add the trailer on. Can be: approved, changes_requested, commented Default: approved |
name: Add Reviewed-by trailer
description: Automatically add a Reviewed-by trailer to Pull Requests
runs:
using: docker
image: Dockerfile
branding:
icon: thumbs-up
color: green
inputs:
states:
description: "Comma delimited list of states to add the trailer on. Can be: approved, changes_requested, commented"
default: approved
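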
Action ID: marketplace/jidicula/clang-format-action
Author: Unknown
Publisher: jidicula
Repository: github.com/jidicula/clang-format-action
Use clang-format to see if your C/C++/Protobuf code is formatted according to project guidelines.
| Name | Required | Description |
|---|---|---|
clang-format-version |
Optional | The major version of clang-format that you want to use. Default: 21 |
check-path |
Optional | The path to the directory you want to check for correct C/C++/Protobuf formatting. Default is the full repository. Default: . |
exclude-regex |
Optional | A regex to exclude files or directories that should not be checked. Default is empty. |
fallback-style |
Optional | The fallback style for clang-format if no .clang-format file exists in your repository. Default: llvm |
include-regex |
Optional | A regex to override the C/C++/Protobuf/CUDA filetype regex that selects which files should be checked. If left empty, the default regex defined in `check.sh` is used. |
name: "clang-format Check"
description: "Use clang-format to see if your C/C++/Protobuf code is formatted according to project guidelines."
branding:
icon: "check-circle"
color: "blue"
inputs:
clang-format-version:
description: 'The major version of clang-format that you want to use.'
required: false
default: '21'
check-path:
description: 'The path to the directory you want to check for correct C/C++/Protobuf formatting. Default is the full repository.'
required: false
default: '.'
exclude-regex:
description: 'A regex to exclude files or directories that should not be checked. Default is empty.'
required: false
default: ''
fallback-style:
description: 'The fallback style for clang-format if no .clang-format file exists in your repository.'
required: false
default: 'llvm'
include-regex:
description: 'A regex to override the C/C++/Protobuf/CUDA filetype regex that selects which files should be checked. If left empty, the default regex defined in `check.sh` is used.'
required: false
default: ''
runs:
using: "composite"
steps:
- run: |
"${{ github.action_path }}/check.sh" "${{ inputs.clang-format-version }}" "${{ inputs.check-path }}" "${{ inputs.fallback-style }}" "${{ inputs.exclude-regex }}" "${{ inputs.include-regex }}"
shell: bash
- name: Save PR head commit SHA
if: failure() && github.event_name == 'pull_request'
shell: bash
run: |
SHA="${{ github.event.pull_request.head.sha }}"
echo "SHA=$SHA" >> $GITHUB_ENV
- name: Save latest commit SHA if not PR
if: failure() && github.event_name != 'pull_request'
shell: bash
run: echo "SHA=${{ github.sha }}" >> $GITHUB_ENV
- name: Report failure in job summary
if: failure()
run: |
DEEPLINK="${{ github.server_url }}/${{ github.repository }}/commit/${{ env.SHA }}"
echo -e "Format check failed on commit [${GITHUB_SHA:0:8}]($DEEPLINK) with files:\n$(<$GITHUB_WORKSPACE/failing-files.txt)" >> $GITHUB_STEP_SUMMARY
shell: bash
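A minimal sketch running the check on pull requests against a `src/` subtree; the `@main` ref is a placeholder, and the version and path values only override the defaults shown above.

```yaml
# Hypothetical workflow: fail the build when C/C++ sources under src/ are not clang-format clean.
name: clang-format-example
on: [pull_request]
jobs:
  format-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: jidicula/clang-format-action@main  # pin to a released tag in practice
        with:
          clang-format-version: '21'
          check-path: 'src'
          fallback-style: 'llvm'
```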
Action ID: marketplace/amirisback/frogo-recycler-view
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/frogo-recycler-view
Library Easy FrogoRecyclerView Based on RecyclerView, Full and Clear Documentation, Have Empty View :books:
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Recycler-View'
description: 'Library Easy FrogoRecyclerView Based on RecyclerView, Full and Clear Documentation, Have Empty View :books:'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/dflook/tofu-unlock-state
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-unlock-state
Force unlocks an OpenTofu remote state.
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the OpenTofu root module that defines the remote state to unlock Default: . |
workspace |
Optional | OpenTofu workspace to unlock the remote state for Default: default |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
lock_id |
Required | The ID of the state lock to release. |
name: tofu-unlock-state
description: Force unlocks an OpenTofu remote state.
author: Daniel Flook
inputs:
path:
description: Path to the OpenTofu root module that defines the remote state to unlock
required: false
default: "."
workspace:
description: OpenTofu workspace to unlock the remote state for
required: false
default: "default"
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
lock_id:
description: The ID of the state lock to release.
required: true
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/unlock-state.sh
branding:
icon: globe
color: purple
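A minimal sketch of a manually triggered unlock workflow; the ref and module path are placeholders, and `lock_id` is supplied as a `workflow_dispatch` input because it is only known after a lock error has been reported.

```yaml
# Hypothetical workflow: force-unlock a stuck OpenTofu state lock on demand.
name: tofu-unlock-example
on:
  workflow_dispatch:
    inputs:
      lock_id:
        description: 'Lock ID reported by the failed tofu run'
        required: true
jobs:
  unlock:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/tofu-unlock-state@main  # pin to a released tag in practice
        with:
          path: infrastructure          # placeholder root module path
          workspace: default
          lock_id: ${{ github.event.inputs.lock_id }}
```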
Action ID: marketplace/Readme-Workflows/recent-activity
Author: jamesgeorge007, abhijoshi2k, PuneetGopinath, Andre601
Publisher: Readme-Workflows
Repository: github.com/Readme-Workflows/recent-activity
Add your recent activity to your profile readme!
| Name | Required | Description |
|---|---|---|
GH_USERNAME |
Optional | Your GitHub username Default: ${{ github.repository_owner }} |
CONFIG_FILE |
Optional | Path to configuration file Default: ./.github/recent-activity.config.yml |
name: Recent GitHub Activity - Profile Readme
description: Add your recent activity to your profile readme!
author: jamesgeorge007, abhijoshi2k, PuneetGopinath, Andre601
inputs:
GH_USERNAME:
description: "Your GitHub username"
default: ${{ github.repository_owner }}
required: false
CONFIG_FILE:
description: "Path to configuration file"
default: "./.github/recent-activity.config.yml"
required: false
branding:
color: orange
icon: activity
runs:
using: node16
main: dist/index.js
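A minimal sketch running the update on a schedule in a profile repository; the ref and cron expression are placeholders, and passing the default token via the environment is an assumption.

```yaml
# Hypothetical workflow: refresh the recent-activity section of a profile README twice a day.
name: recent-activity-example
on:
  schedule:
    - cron: '0 */12 * * *'
  workflow_dispatch:
jobs:
  update-readme:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Readme-Workflows/recent-activity@main  # pin to a released tag in practice
        with:
          CONFIG_FILE: ./.github/recent-activity.config.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # assumption: the action uses the default token to push the update
```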
Action ID: marketplace/amirisback/android-floating-action-button
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-floating-action-button
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/lucernae/chart-releaser-action
Author: The Helm authors
Publisher: lucernae
Repository: github.com/lucernae/chart-releaser-action
Host a Helm charts repo on GitHub Pages
| Name | Required | Description |
|---|---|---|
version |
Optional | The chart-releaser version to use (default: v1.0.0) |
charts_dir |
Optional | The charts directory Default: charts |
charts_repo_url |
Required | The GitHub Pages URL to the charts repo (default: https://<owner>.github.io/<repo>) |
name: "Helm Chart Releaser"
description: "Host a Helm charts repo on GitHub Pages"
author: "The Helm authors"
branding:
color: blue
icon: anchor
inputs:
version:
description: "The chart-releaser version to use (default: v1.0.0)"
charts_dir:
description: The charts directory
default: charts
charts_repo_url:
description: "The GitHub Pages URL to the charts repo (default: https://<owner>.github.io/<repo>)"
required: true
runs:
using: "node12"
main: "main.js"
Action ID: marketplace/amirisback/automated-build-android-app-with-github-action
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/automated-build-android-app-with-github-action
Automated build android app bundle and apk with github action
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'automated-build-android-app-with-github-action'
description: 'Automated build android app bundle and apk with github action'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/crowdin/translate-readme
Author: Andrii Bodnar
Publisher: crowdin
Repository: github.com/crowdin/translate-readme
Automate translation of your README.md files via Crowdin
| Name | Required | Description |
|---|---|---|
file |
Optional | The Readme file name Default: README.md |
branch |
Optional | Specify branch name to create in Crowdin |
destination |
Optional | A directory where the localized readmes will be placed Default: ./ |
languages |
Optional | A list of languages to translate the Readme |
language_switcher |
Optional | Defines whether to add a language switcher to the Readme Default: false |
upload_sources |
Optional | Defines whether to upload the source file to Crowdin Default: true |
download_translations |
Optional | Defines whether to download translations from Crowdin Default: true |
name: 'Translate Readme'
description: 'Automate translation of your README.md files via Crowdin'
author: 'Andrii Bodnar'
branding:
icon: 'file-text'
color: 'green'
inputs:
file:
required: false
description: 'The Readme file name'
default: 'README.md'
branch:
required: false
description: 'Specify branch name to create in Crowdin'
destination:
required: false
description: 'A directory where the localized readmes will be placed'
default: './'
languages:
required: false
description: 'A list of languages to translate the Readme'
language_switcher:
required: false
description: 'Defines whether to add a language switcher to the Readme'
default: 'false'
upload_sources:
required: false
description: 'Defines whether to upload the source file to Crowdin'
default: 'true'
download_translations:
required: false
description: 'Defines whether to download translations from Crowdin'
default: 'true'
runs:
using: 'node16'
main: 'dist/index.js'
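A minimal sketch; the ref, the language-list format, and the Crowdin credentials passed as environment variables are assumptions based on the standard Crowdin actions, so check the upstream README before relying on them.

```yaml
# Hypothetical workflow: translate README.md into a few languages via Crowdin on demand.
name: translate-readme-example
on: [workflow_dispatch]
jobs:
  translate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: crowdin/translate-readme@main  # pin to a released tag in practice
        with:
          file: README.md
          destination: ./translations
          languages: 'uk, fr, es'            # assumption about the expected list format
          language_switcher: 'true'
        env:
          CROWDIN_PROJECT_ID: ${{ secrets.CROWDIN_PROJECT_ID }}          # assumption: standard Crowdin credentials
          CROWDIN_PERSONAL_TOKEN: ${{ secrets.CROWDIN_PERSONAL_TOKEN }}
```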
Action ID: marketplace/aws-actions/amazon-ecr-login
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/amazon-ecr-login
Logs in the local Docker client to one or more Amazon ECR Private registries or an Amazon ECR Public registry
| Name | Required | Description |
|---|---|---|
http-proxy |
Optional | Proxy to use for the AWS SDK agent. |
mask-password |
Optional | Mask the docker password to prevent it being printed to action logs if debug logging is enabled. NOTE: This will prevent the Docker password output from being shared between separate jobs. Options: ['true', 'false'] Default: true |
registries |
Optional | A comma-delimited list of AWS account IDs that are associated with the ECR Private registries. If you do not specify a registry, the default ECR Private registry is assumed. If 'public' is given as input to 'registry-type', this input is ignored. |
registry-type |
Optional | Which ECR registry type to log into. Options: [private, public] Default: private |
skip-logout |
Optional | Whether to skip explicit logout of the registries during post-job cleanup. Exists for backward compatibility on self-hosted runners. Not recommended. Options: ['true', 'false'] Default: false |
| Name | Description |
|---|---|
registry |
The URI of the ECR Private or ECR Public registry. If logging into multiple registries on ECR Private, this output will not be set. |
name: 'Amazon ECR "Login" Action for GitHub Actions'
description: 'Logs in the local Docker client to one or more Amazon ECR Private registries or an Amazon ECR Public registry'
branding:
icon: 'cloud'
color: 'orange'
inputs:
http-proxy:
description: >-
Proxy to use for the AWS SDK agent.
required: false
mask-password:
description: >-
Mask the docker password to prevent it being printed to action logs if debug logging is enabled.
NOTE: This will prevent the Docker password output from being shared between separate jobs.
Options: ['true', 'false']
required: false
default: 'true'
registries:
description: >-
A comma-delimited list of AWS account IDs that are associated with the ECR Private registries.
If you do not specify a registry, the default ECR Private registry is assumed.
If 'public' is given as input to 'registry-type', this input is ignored.
required: false
registry-type:
description: >-
Which ECR registry type to log into.
Options: [private, public]
required: false
default: private
skip-logout:
description: >-
Whether to skip explicit logout of the registries during post-job cleanup.
Exists for backward compatibility on self-hosted runners.
Not recommended.
Options: ['true', 'false']
required: false
default: 'false'
outputs:
registry:
description: >-
The URI of the ECR Private or ECR Public registry.
If logging into multiple registries on ECR Private, this output will not be set.
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/cleanup/index.js'
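A minimal sketch chaining this action after `configure-aws-credentials` and using the `registry` output to tag an image; the account ID, region, role ARN, and repository name are placeholders.

```yaml
# Hypothetical workflow: log in to ECR Private and push an image tagged with the registry URI output.
name: ecr-login-example
on: [push]
permissions:
  id-token: write
  contents: read
jobs:
  push-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::123456789012:role/example-ecr-push  # placeholder ARN
      - id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        run: |
          docker build -t "$REGISTRY/example-repo:latest" .   # placeholder repository name
          docker push "$REGISTRY/example-repo:latest"
```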
Action ID: marketplace/andstor/backstopjs-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/backstopjs-action
GitHub Action for running BackstopJS
| Name | Required | Description |
|---|---|---|
command |
Optional | The BackstopJS command to run. Default: test |
config_file |
Optional | The path to a BackstopJS json configuration file. Default: backstop.json |
filter |
Optional | The path to the image file to be compared against the base image. |
| Name | Description |
|---|---|
backstopjs-dir |
The BackstopJS dir. |
name: 'BackstopJS Action'
description: 'GitHub Action for running BackstopJS'
author: 'André Storhaug'
branding:
icon: 'image'
color: 'purple'
inputs:
command:
description: 'The BackstopJS command to run.'
default: 'test'
required: false
config_file:
description: 'The path to a BackstopJS json configuration file.'
default: 'backstop.json'
required: false
filter:
description: 'The path to the image file to be compared against the base image.'
required: false
outputs:
backstopjs-dir:
description: 'The BackstopJS dir.'
runs:
using: 'node12'
main: 'src/index.js'
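A minimal sketch running a BackstopJS test with the defaults; the ref is a placeholder, and uploading the reported `backstopjs-dir` output as an artifact on failure is an optional follow-up.

```yaml
# Hypothetical workflow: run BackstopJS visual regression tests and keep the report on failure.
name: backstopjs-example
on: [pull_request]
jobs:
  visual-regression:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: backstop
        uses: andstor/backstopjs-action@main  # pin to a released tag in practice
        with:
          command: test
          config_file: backstop.json
      - if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: backstop-report
          path: ${{ steps.backstop.outputs.backstopjs-dir }}
```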
Action ID: marketplace/amirisback/consumable-code-pixabay-api
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/consumable-code-pixabay-api
Retrofit has been Handled, Consumable code for request Public API (Pixabay API)
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Consumable Code Pixabay API'
description: 'Retrofit has been Handled, Consumable code for request Public API (Pixabay API)'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/keyboard
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/keyboard
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Keyboard-Open-Source'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/gradle/actions
Author: Unknown
Publisher: gradle
Repository: github.com/gradle/actions
A collection of actions for building Gradle projects, as well as generating a dependency graph via Dependency Submission.
name: Build with Gradle
description: A collection of actions for building Gradle projects, as well as generating a dependency graph via Dependency Submission.
runs:
using: "composite"
steps:
- run: |
echo "::error::The path 'gradle/actions' is not a valid action. Please use 'gradle/actions/setup-gradle' or 'gradle/actions/dependency-submission'."
exit 1
shell: bash
branding:
icon: 'box'
color: 'gray-dark'
Action ID: marketplace/srt32/create-pull-request
Author: Peter Evans
Publisher: srt32
Repository: github.com/srt32/create-pull-request
Creates a pull request for changes to your repository in the actions workspace
name: 'Create Pull Request'
author: 'Peter Evans'
description: 'Creates a pull request for changes to your repository in the actions workspace'
runs:
using: 'docker'
image: 'docker://peterevans/create-pull-request:1.5.2'
branding:
icon: 'git-pull-request'
color: 'gray-dark'
Action ID: marketplace/dkamm/pr-quiz
Author: dkamm
Publisher: dkamm
Repository: github.com/dkamm/pr-quiz
A GitHub Action that uses AI to generate a quiz based on a pull request
| Name | Required | Description |
|---|---|---|
github-token |
Required | GitHub token for API access |
openai-api-key |
Required | OpenAI API key for API access |
ngrok-authtoken |
Required | The ngrok authtoken to use for the server hosting the quiz. |
model |
Optional | The model to use for generating the quiz. It must be a model that supports structured outputs. Defaults to 'o4-mini'. Default: o4-mini |
lines-changed-threshold |
Optional | The minimum number of lines changed required to create a quiz. This is to prevent quizzes from being created for small pull requests. Default: 100 |
time-limit-minutes |
Optional | The time limit to complete the quiz in minutes. This prevents the action from running indefinitely. Default: 10 |
max-attempts |
Optional | The maximum number of attempts to pass the quiz. A value of 0 means unlimited attempts. Default: 3 |
exclude-file-patterns |
Optional | A list of file patterns to exclude from the quiz as a JSON-ified string. Default: ["**/*-lock.json", "**/*-lock.yaml", "**/*.lock", "**/*.map", "**/*.pb.*", "**/*_pb2.py", "**/*.generated.*", "**/*.auto.*"] |
system-prompt |
Optional | Optional override for the system prompt. Be sure to specify that multiple choice questions must be returned. |
name: PR Quiz
description:
A GitHub Action that uses AI to generate a quiz based on a pull request
author: dkamm
branding:
icon: cpu
color: blue
inputs:
github-token:
description: GitHub token for API access
required: true
openai-api-key:
description: OpenAI API key for API access
required: true
ngrok-authtoken:
description: The ngrok authtoken to use for the server hosting the quiz.
required: true
model:
description:
The model to use for generating the quiz. It must be a model that
supports structured outputs. Defaults to 'o4-mini'.
required: false
default: 'o4-mini'
lines-changed-threshold:
description:
The minimum number of lines changed required to create a quiz. This is to
prevent quizzes from being created for small pull requests.
required: false
default: '100'
time-limit-minutes:
description:
The time limit to complete the quiz in minutes. This prevents the action
from running indefinitely.
required: false
default: '10'
max-attempts:
description:
The maximum number of attempts to pass the quiz. A value of 0 means
unlimited attempts.
required: false
default: '3'
exclude-file-patterns:
description:
A list of file patterns to exclude from the quiz as a JSON-ified string.
required: false
default:
'["**/*-lock.json", "**/*-lock.yaml", "**/*.lock", "**/*.map",
"**/*.pb.*", "**/*_pb2.py", "**/*.generated.*", "**/*.auto.*"]'
system-prompt:
description:
Optional override for the system prompt. Be sure to specify that multiple
choice questions must be returned.
required: false
runs:
using: composite
steps:
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install external dependencies
run: npm install @ngrok/ngrok ejs express
shell: bash
working-directory: ${{ github.action_path }}
- name: Run action
run: npx node dist/index.cjs
shell: bash
working-directory: ${{ github.action_path }}
env:
INPUT_GITHUB_TOKEN: ${{ inputs.github-token }}
INPUT_OPENAI_API_KEY: ${{ inputs.openai-api-key }}
INPUT_NGROK_AUTHTOKEN: ${{ inputs.ngrok-authtoken }}
INPUT_LINES_CHANGED_THRESHOLD: ${{ inputs.lines-changed-threshold }}
INPUT_TIME_LIMIT_MINUTES: ${{ inputs.time-limit-minutes }}
INPUT_MAX_ATTEMPTS: ${{ inputs.max-attempts }}
INPUT_EXCLUDE_FILE_PATTERNS: ${{ inputs.exclude-file-patterns }}
INPUT_MODEL: ${{ inputs.model }}
INPUT_SYSTEM_PROMPT: ${{ inputs.system-prompt }}
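A minimal usage sketch for a pull_request workflow; the `@v1` ref and the secret names are assumptions:
on: pull_request
jobs:
  quiz:
    runs-on: ubuntu-latest
    steps:
      - uses: dkamm/pr-quiz@v1                             # ref assumed
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}    # secret name assumed
          ngrok-authtoken: ${{ secrets.NGROK_AUTHTOKEN }}  # secret name assumed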
Action ID: marketplace/sobolevn/telegram-notifier
Author: Sehat1137
Publisher: sobolevn
Repository: github.com/sobolevn/telegram-notifier
Send Telegram notifications for new GitHub issues
| Name | Required | Description |
|---|---|---|
tg-bot-token |
Required | Telegram Bot Token |
tg-chat-id |
Required | Telegram Chat ID |
tg-message-thread-id |
Optional | Telegram Message Thread ID |
github-token |
Optional | GitHub Token for API access |
base-url |
Optional | Base URL for sulguk Default: https://github.com |
python-version |
Optional | Python version for action Default: 3.10 |
attempt-count |
Optional | Telegram API attempt count Default: 10 |
html-template |
Optional | HTML template for Telegram message |
md-template |
Optional | Markdown template for Telegram message |
name: "Telegram Issue Notifier"
description: "Send Telegram notifications for new GitHub issues"
author: "Sehat1137"
inputs:
tg-bot-token:
description: "Telegram Bot Token"
required: true
tg-chat-id:
description: "Telegram Chat ID"
required: true
tg-message-thread-id:
description: "Telegram Message Thread ID"
required: false
github-token:
description: "GitHub Token for API access"
required: false
base-url:
description: "Base URL for sulguk"
required: false
default: "https://github.com"
python-version:
description: "Python version for action"
required: false
default: "3.10"
attempt-count:
description: "Telegram API attempt count"
required: false
default: "10"
html-template:
description: "HTML template for Telegram message"
required: false
md-template:
description: "Markdown template for Telegram message"
required: false
runs:
using: "composite"
steps:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
shell: bash
run: |
pip install -r $GITHUB_ACTION_PATH/requirements.txt
- name: Send Telegram notification
shell: bash
env:
TELEGRAM_BOT_TOKEN: ${{ inputs.tg-bot-token }}
TELEGRAM_CHAT_ID: ${{ inputs.tg-chat-id }}
GITHUB_TOKEN: ${{ inputs.github-token }}
EVENT_URL: ${{ github.event.issue.url || github.event.pull_request.url }}
BASE_URL: ${{ inputs.base-url }}
ATTEMPT_COUNT: ${{ inputs.attempt-count }}
HTML_TEMPLATE: ${{ inputs.html-template }}
MD_TEMPLATE: ${{ inputs.md-template }}
TELEGRAM_MESSAGE_THREAD_ID: ${{ inputs.tg-message-thread-id }}
run: |
python3 $GITHUB_ACTION_PATH/main.py
branding:
icon: "message-circle"
color: "blue"
Action ID: marketplace/raeperd/setup-go
Author: GitHub
Publisher: raeperd
Repository: github.com/raeperd/setup-go
Setup a Go environment and add it to the PATH
| Name | Required | Description |
|---|---|---|
go-version |
Optional | The Go version to download (if necessary) and use. Supports semver spec and ranges. Be sure to enclose this option in single quotation marks. |
go-version-file |
Optional | Path to the go.mod or go.work file. |
check-latest |
Optional | Set this option to true if you want the action to always check for the latest available version that satisfies the version spec |
token |
Optional | Used to pull Go distributions from go-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. Default: ${{ github.server_url == 'https://github.com' && github.token || '' }} |
cache |
Optional | Used to specify whether caching is needed. Set to true, if you'd like to enable caching. Default: True |
cache-dependency-path |
Optional | Used to specify the path to a dependency file - go.sum |
architecture |
Optional | Target architecture for Go to use. Examples: x86, x64. Will use system architecture by default. |
| Name | Description |
|---|---|
go-version |
The installed Go version. Useful when given a version range as input. |
cache-hit |
A boolean value to indicate if a cache was hit |
name: 'Setup Go environment'
description: 'Setup a Go environment and add it to the PATH'
author: 'GitHub'
inputs:
go-version:
description: 'The Go version to download (if necessary) and use. Supports semver spec and ranges. Be sure to enclose this option in single quotation marks.'
go-version-file:
description: 'Path to the go.mod or go.work file.'
check-latest:
description: 'Set this option to true if you want the action to always check for the latest available version that satisfies the version spec'
default: false
token:
description: Used to pull Go distributions from go-versions. Since there's a default, this is typically not supplied by the user. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting.
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
cache:
description: Used to specify whether caching is needed. Set to true, if you'd like to enable caching.
default: true
cache-dependency-path:
description: 'Used to specify the path to a dependency file - go.sum'
architecture:
description: 'Target architecture for Go to use. Examples: x86, x64. Will use system architecture by default.'
outputs:
go-version:
description: 'The installed Go version. Useful when given a version range as input.'
cache-hit:
description: 'A boolean value to indicate if a cache was hit'
runs:
using: 'node20'
main: 'dist/setup/index.js'
post: 'dist/cache-save/index.js'
post-if: success()
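A minimal sketch; the `@v5` ref, the Go version, and the test command are assumptions:
steps:
  - uses: actions/checkout@v4
  - uses: raeperd/setup-go@v5   # ref assumed
    with:
      go-version: '1.22'        # version assumed
  - run: go test ./...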
Action ID: marketplace/docker/setup-buildx-action
Author: docker
Publisher: docker
Repository: github.com/docker/setup-buildx-action
Set up Docker Buildx
| Name | Required | Description |
|---|---|---|
version |
Optional | Buildx version. (eg. v0.3.0) |
driver |
Optional | Sets the builder driver to be used Default: docker-container |
driver-opts |
Optional | List of additional driver-specific options. (eg. image=moby/buildkit:master) |
buildkitd-flags |
Optional | BuildKit daemon flags |
buildkitd-config |
Optional | BuildKit daemon config file |
buildkitd-config-inline |
Optional | Inline BuildKit daemon config |
install |
Optional | Sets up docker build command as an alias to docker buildx build Default: false |
use |
Optional | Switch to this builder instance Default: true |
name |
Optional | Name of the builder. If not specified, one will be generated or if it already exists, it will be used instead of creating a new one. |
endpoint |
Optional | Optional address for docker socket or context from `docker context ls` |
platforms |
Optional | Fixed platforms for current node. If not empty, values take priority over the detected ones |
append |
Optional | Append additional nodes to the builder |
keep-state |
Optional | Keep BuildKit state on cleanup. This is only useful on persistent self-hosted runners. Default: false |
cache-binary |
Optional | Cache buildx binary to GitHub Actions cache backend Default: true |
cleanup |
Optional | Cleanup temp files and remove builder at the end of a job Default: true |
config |
Optional | BuildKit daemon config file |
config-inline |
Optional | Inline BuildKit daemon config |
| Name | Description |
|---|---|
name |
Builder name |
driver |
Builder driver |
platforms |
Builder node platforms (preferred or available) |
nodes |
Builder nodes metadata |
endpoint |
Builder node endpoint (deprecated, use nodes output instead) |
status |
Builder node status (deprecated, use nodes output instead) |
flags |
Builder node flags (deprecated, use nodes output instead) |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Docker Setup Buildx'
description: 'Set up Docker Buildx'
author: 'docker'
branding:
icon: 'anchor'
color: 'blue'
inputs:
version:
description: 'Buildx version. (eg. v0.3.0)'
required: false
driver:
description: 'Sets the builder driver to be used'
default: 'docker-container'
required: false
driver-opts:
description: 'List of additional driver-specific options. (eg. image=moby/buildkit:master)'
required: false
buildkitd-flags:
description: 'BuildKit daemon flags'
required: false
buildkitd-config:
description: 'BuildKit daemon config file'
required: false
buildkitd-config-inline:
description: 'Inline BuildKit daemon config'
required: false
install:
description: 'Sets up docker build command as an alias to docker buildx build'
default: 'false'
required: false
use:
description: 'Switch to this builder instance'
default: 'true'
required: false
name:
description: 'Name of the builder. If not specified, one will be generated or if it already exists, it will be used instead of creating a new one.'
required: false
endpoint:
description: 'Optional address for docker socket or context from `docker context ls`'
required: false
platforms:
description: 'Fixed platforms for current node. If not empty, values take priority over the detected ones'
required: false
append:
description: 'Append additional nodes to the builder'
required: false
keep-state:
description: 'Keep BuildKit state on cleanup. This is only useful on persistent self-hosted runners.'
default: 'false'
required: false
cache-binary:
description: 'Cache buildx binary to GitHub Actions cache backend'
default: 'true'
required: false
cleanup:
description: 'Cleanup temp files and remove builder at the end of a job'
default: 'true'
required: false
# TODO: remove deprecated config and config-inline inputs
config:
description: 'BuildKit daemon config file'
deprecationMessage: 'Use buildkitd-config instead'
required: false
config-inline:
description: 'Inline BuildKit daemon config'
deprecationMessage: 'Use buildkitd-config-inline instead'
required: false
outputs:
name:
description: 'Builder name'
driver:
description: 'Builder driver'
platforms:
description: 'Builder node platforms (preferred or available)'
nodes:
description: 'Builder nodes metadata'
endpoint:
description: 'Builder node endpoint (deprecated, use nodes output instead)'
status:
description: 'Builder node status (deprecated, use nodes output instead)'
flags:
description: 'Builder node flags (deprecated, use nodes output instead)'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
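A minimal sketch that also reads the documented outputs; the `@v3` ref is an assumption:
steps:
  - id: buildx
    uses: docker/setup-buildx-action@v3   # ref assumed
  - run: echo "Builder ${{ steps.buildx.outputs.name }} uses driver ${{ steps.buildx.outputs.driver }}"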
Action ID: marketplace/azure/k8s-set-context
Author: Unknown
Publisher: azure
Repository: github.com/azure/k8s-set-context
Set the context of a target Kubernetes cluster and export the kubeconfig which is used by subsequent actions
| Name | Required | Description |
|---|---|---|
cluster-type |
Required | Acceptable values: generic or arc Default: generic |
method |
Required | Acceptable values: kubeconfig or service-account or service-principal Default: kubeconfig |
kubeconfig |
Optional | Contents of kubeconfig file |
kubeconfig-encoding |
Optional | Encoding of the kubeconfig input. Accepts "plaintext" (default) or "base64". Default: plaintext |
context |
Optional | If your kubeconfig has multiple contexts, use this field to use a specific context, otherwise the default one would be chosen |
k8s-url |
Optional | Cluster Url |
k8s-secret |
Optional | Service account secret (run kubectl get serviceaccounts <service-account-name> -o yaml and copy the service-account-secret-name) |
token |
Optional | Token extracted from the secret of service account (should be base 64 decoded) |
resource-group |
Optional | Azure resource group name |
cluster-name |
Optional | Azure connected cluster name |
name: 'Kubernetes Set Context'
description: 'Set the context of a target Kubernetes cluster and export the kubeconfig which is used by subsequent actions'
inputs:
# Please ensure you have used azure/login in the workflow before this action
cluster-type:
description: 'Acceptable values: generic or arc'
required: true
default: 'generic'
method:
description: 'Acceptable values: kubeconfig or service-account or service-principal'
required: true
default: 'kubeconfig'
kubeconfig:
description: 'Contents of kubeconfig file'
required: false
kubeconfig-encoding:
description: 'Encoding of the kubeconfig input. Accepts "plaintext" (default) or "base64".'
required: false
default: 'plaintext'
context:
description: 'If your kubeconfig has multiple contexts, use this field to use a specific context, otherwise the default one would be chosen'
required: false
k8s-url:
description: 'Cluster Url'
required: false
k8s-secret:
description: 'Service account secret (run kubectl get serviceaccounts <service-account-name> -o yaml and copy the service-account-secret-name)'
required: false
token:
description: 'Token extracted from the secret of service account (should be base 64 decoded)'
required: false
resource-group:
description: 'Azure resource group name'
required: false
cluster-name:
description: 'Azure connected cluster name'
required: false
branding:
color: 'blue'
runs:
using: 'node20'
main: 'lib/index.js'
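A minimal sketch using the kubeconfig method; the `@v4` ref and the secret name are assumptions:
steps:
  - uses: azure/k8s-set-context@v4           # ref assumed
    with:
      cluster-type: generic
      method: kubeconfig
      kubeconfig: ${{ secrets.KUBECONFIG }}  # secret name assumed
  - run: kubectl get pods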
Action ID: marketplace/tgymnich/publish-github-action
Author: tgymnich
Publisher: tgymnich
Repository: github.com/tgymnich/publish-github-action
Publish your GitHub Action
| Name | Required | Description |
|---|---|---|
github_token |
Required | Token for the github API |
name: 'Publish GitHub Action'
description: 'Publish your GitHub Action'
author: 'tgymnich'
branding:
icon: 'truck'
color: 'blue'
inputs:
github_token:
description: 'Token for the github API'
required: true
runs:
using: 'node20'
main: 'lib/main.js'
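A minimal sketch; the `@v1` ref is an assumption:
steps:
  - uses: actions/checkout@v4
  - uses: tgymnich/publish-github-action@v1   # ref assumed
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}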
Action ID: marketplace/axel-op/hello-world-dart-github-action
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/hello-world-dart-github-action
A template to demonstrate how to build a Dart action. Greet someone and record the time.
| Name | Required | Description |
|---|---|---|
who-to-greet |
Required | Who to greet Default: World |
| Name | Description |
|---|---|
time |
The time we greeted you |
name: "Hello World Dart Action"
description: "A template to demonstrate how to build a Dart action. Greet someone and record the time."
branding:
color: blue
icon: feather
inputs:
who-to-greet: # id of input
description: "Who to greet"
required: true
default: "World"
outputs:
time: # id of output
description: "The time we we greeted you"
runs:
using: "node12"
pre: "scripts/setup.js" # Delete this line to not run the setup script
main: "scripts/index.js"
post: "scripts/cleanup.js" # Delete this line to not run the cleanup script
Action ID: marketplace/MarketingPipeline/Image-Optimizer-Action
Author: github.com/MarketingPipeline
Publisher: MarketingPipeline
Repository: github.com/MarketingPipeline/Image-Optimizer-Action
A Github action to optimize images in a repo
| Name | Required | Description |
|---|---|---|
filename |
Optional | Folder or image path to optimize |
recursion |
Optional | Optimize all image files in current working directory and all of its subdirectories |
name: 'Image-Optimizer-Action'
description: 'A Github action to optimize images in a repo'
author: 'github.com/MarketingPipeline'
inputs:
filename:
description: 'Folder or image path to optimize'
default: ''
required: false
recursion:
description: 'Optimize all image files in current working directory and all of its subdirectories'
default: ''
required: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.filename }}
- ${{ inputs.recursion }}
branding:
icon: 'activity'
color: 'white'
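A minimal sketch; the `@master` ref and the folder path are assumptions:
steps:
  - uses: actions/checkout@v4
  - uses: MarketingPipeline/Image-Optimizer-Action@master   # ref assumed
    with:
      filename: assets/images                               # path assumed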
Action ID: marketplace/vsoch/uptodate
Author: Vanessa Sochat
Publisher: vsoch
Repository: github.com/vsoch/uptodate
Check that repository assets are up to date
| Name | Required | Description |
|---|---|---|
root |
Optional | Root path to provide to command. Can be a Dockerfile or directory. |
parser |
Required | Parser to run, one of dockerfile, or dockerhierarchy |
dry_run |
Optional | Do a dry run (don't write, but show changes) one of true or false, defaults to false |
changes |
Optional | Only consider relevant changed files for the current git commit |
flags |
Optional | Extra flags for the parser (e.g., --no-build-args or --no-empty-build-args for dockerfilelist) |
| Name | Description |
|---|---|
dockerfile_matrix |
A matrix of Dockerfile changes with name and filename set to the Dockerfile name |
dockerhierarchy_matrix |
A matrix of new Dockerfiles and the corresponding tag (Name) |
dockerfilelist_matrix |
A matrix of Dockerfiles listed with dockerfilelist |
dockerbuild_matrix |
A matrix of Docker builds |
name: "uptodate-action"
author: "Vanessa Sochat"
description: "Check that repository assets are up to date"
inputs:
root:
description: "Root path to provide to command. Can be a Dockerfile or directory."
required: false
default: ""
parser:
description: "Parser to run, one of dockerfile, or dockerhierarchy"
required: true
dry_run:
description: "Do a dry run (don't write, but show changes) one of true or false, defaults to false"
required: false
default: false
changes:
description: "Only consider relevant changed files for the current git commit"
required: false
default: false
flags:
description: "Extra flags for the parser (e.g., --no-build-args or --no-empty-build-args for dockerfilelist)"
required: false
default: ""
#runs:
# using: "docker"
# image: "Dockerfile"
runs:
using: 'docker'
image: 'docker://ghcr.io/vsoch/uptodate:latest'
branding:
icon: "activity"
color: "blue"
outputs:
dockerfile_matrix:
description: A matrix of Dockerfile changes with name and filename set to the Dockerfile name
dockerhierarchy_matrix:
description: A matrix of new Dockerfiles and the corresponding tag (Name)
dockerfilelist_matrix:
description: A matrix of Dockerfiles listed with dockerfilelist
dockerbuild_matrix:
description: A matrix of Docker builds
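A minimal sketch running the dockerfile parser as a dry run and echoing the resulting matrix; the `@main` ref is an assumption:
steps:
  - uses: actions/checkout@v4
  - id: uptodate
    uses: vsoch/uptodate@main   # ref assumed
    with:
      parser: dockerfile
      dry_run: true
  - run: echo '${{ steps.uptodate.outputs.dockerfile_matrix }}'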
Action ID: marketplace/github/deploy-nodejs
Author: GitHub
Publisher: github
Repository: github.com/github/deploy-nodejs
Install NodeJS on your GitHub Action runner.
name: 'Deploy-NodeJS'
author: 'GitHub'
description: 'Install NodeJS on your GitHub Action runner.'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'check-square'
color: 'blue'
Action ID: marketplace/dflook/tofu-validate
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-validate
Validate an OpenTofu configuration directory
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the OpenTofu module to validate Default: . |
workspace |
Optional | OpenTofu workspace to use for the `terraform.workspace` value while validating. Note that for remote operations in a cloud backend, this is always `default`.
Also used for discovering the OpenTofu version to use, if not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
Default: default |
backend_config |
Optional | List of OpenTofu backend config values, one per line. This is used for discovering the OpenTofu version to use, if not otherwise specified. See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace. This is used for discovering the OpenTofu version to use, if not otherwise specified. See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details. |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure` because the validation failed, this will be set to 'validate-failed'. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run a step when the validate fails. |
name: tofu-validate
description: Validate an OpenTofu configuration directory
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu module to validate
required: false
default: "."
workspace:
description: |
OpenTofu workspace to use for the `terraform.workspace` value while validating. Note that for remote operations in a cloud backend, this is always `default`.
Also used for discovering the OpenTofu version to use, if not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: "default"
backend_config:
description: |
List of OpenTofu backend config values, one per line.
This is used for discovering the OpenTofu version to use, if not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
This is used for discovering the OpenTofu version to use, if not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: ""
outputs:
failure-reason:
description: |
When the job outcome is `failure` because the validation failed, this will be set to 'validate-failed'.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a step when the validate fails.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/validate.sh
branding:
icon: globe
color: purple
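A minimal sketch; the `@v1` ref and the module path are assumptions:
steps:
  - uses: actions/checkout@v4
  - uses: dflook/tofu-validate@v1   # ref assumed
    with:
      path: modules/network         # path assumed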
Action ID: marketplace/dflook/tofu-fmt
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-fmt
Rewrite OpenTofu files into canonical format
| Name | Required | Description |
|---|---|---|
path |
Optional | The path containing OpenTofu files to format. Default: . |
workspace |
Optional | OpenTofu workspace to inspect when discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
Default: default |
variables |
Optional | Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified. See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified. See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details. Paths should be relative to the GitHub Actions workspace |
name: tofu-fmt
description: Rewrite OpenTofu files into canonical format
author: Daniel Flook
inputs:
path:
description: The path containing OpenTofu files to format.
required: false
default: "."
workspace:
description: |
OpenTofu workspace to inspect when discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: "default"
variables:
description: |
Variables to set when initializing OpenTofu. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: |
List of OpenTofu backend config values, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line. This is used for discovering the OpenTofu version to use, if the version is not otherwise specified.
See [dflook/tofu-version](https://github.com/dflook/terraform-github-actions/tree/main/tofu-version#tofu-version-action) for details.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/fmt.sh
branding:
icon: globe
color: purple
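A minimal sketch; the `@v1` ref and the path are assumptions:
steps:
  - uses: actions/checkout@v4
  - uses: dflook/tofu-fmt@v1   # ref assumed
    with:
      path: terraform            # path assumed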
Action ID: marketplace/rhysd/get-secretmanager-secrets
Author: Google LLC
Publisher: rhysd
Repository: github.com/rhysd/get-secretmanager-secrets
Get secrets from Google Secret Manager and make their results available as output variables.
| Name | Required | Description |
|---|---|---|
secrets |
Required | Comma-separated or newline-separated list of secrets to fetch. Secrets must be of the format <project>/<secret> or <project>/<secret>/<version>. |
min_mask_length |
Optional | Minimum line length for a secret to be masked. Extremely short secrets
(e.g. "{" or "a") can make GitHub Actions log output unreadable. This is
especially important for multi-line secrets, since each line of the secret
is masked independently. Default: 4 |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Get Secret Manager secrets'
author: 'Google LLC'
description: |-
Get secrets from Google Secret Manager and make their results available as
output variables.
inputs:
secrets:
description: |-
Comma-separated or newline-separated list of secrets to fetch. Secrets
must be of the format <project>/<secret> or <project>/<secret>/<version>.
required: true
min_mask_length:
description: |-
Minimum line length for a secret to be masked. Extremely short secrets
(e.g. "{" or "a") can make GitHub Actions log output unreadable. This is
especially important for multi-line secrets, since each line of the secret
is masked independently.
required: false
default: '4'
branding:
icon: 'lock'
color: 'blue'
runs:
using: 'node20'
main: 'dist/index.js'
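A minimal sketch, assuming Google Cloud credentials are already configured for the job; the `@v2` ref and the project/secret reference are placeholders:
steps:
  - id: secrets
    uses: rhysd/get-secretmanager-secrets@v2   # ref assumed
    with:
      secrets: my-project/my-secret            # placeholder reference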
Action ID: marketplace/dflook/terraform-version
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-version
Prints Terraform and providers versions
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the Terraform root module directory. Default: . |
workspace |
Optional | The workspace to determine the Terraform version for. Default: default |
backend_config |
Optional | List of Terraform backend config values, one per line. This will be used to fetch the Terraform version set in the cloud workspace if using the `remote` backend. For other backend types, this is used to fetch the version that most recently wrote to the Terraform state. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace. This will be used to fetch the Terraform version set in the cloud workspace if using the `remote` backend. For other backend types, this is used to fetch the version that most recently wrote to the Terraform state. |
| Name | Description |
|---|---|
terraform |
The Hashicorp Terraform or OpenTofu version that is used by the configuration. |
tofu |
If the action chose a version of OpenTofu, this will be set to the version that is used by the configuration. |
name: terraform-version
description: Prints Terraform and providers versions
author: Daniel Flook
inputs:
path:
description: The path to the Terraform root module directory.
required: false
default: "."
workspace:
description: The workspace to determine the Terraform version for.
required: false
default: "default"
backend_config:
description: |
List of Terraform backend config values, one per line.
This will be used to fetch the Terraform version set in the cloud workspace if using the `remote` backend.
For other backend types, this is used to fetch the version that most recently wrote to the Terraform state.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
This will be used to fetch the Terraform version set in the cloud workspace if using the `remote` backend.
For other backend types, this is used to fetch the version that most recently wrote to the Terraform state.
required: false
default: ""
outputs:
terraform:
description: The Hashicorp Terraform or OpenTofu version that is used by the configuration.
tofu:
description: If the action chose a version of OpenTofu, this will be set to the version that is used by the configuration.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/version.sh
branding:
icon: globe
color: purple
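A minimal sketch that echoes the detected version; the `@v1` ref and the path are assumptions:
steps:
  - uses: actions/checkout@v4
  - id: version
    uses: dflook/terraform-version@v1   # ref assumed
    with:
      path: terraform/prod              # path assumed
  - run: echo "Terraform/OpenTofu ${{ steps.version.outputs.terraform }}"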
Action ID: marketplace/AndreasAugustin/actions-template-sync
Author: AndreasAugustin
Publisher: AndreasAugustin
Repository: github.com/AndreasAugustin/actions-template-sync
Synchronises changes of the template repository
| Name | Required | Description |
|---|---|---|
source_gh_token |
Optional | GitHub Token for the repo to be synced from. Can be passed in using $\{{ secrets.GITHUB_TOKEN }} Default: ${{ github.token }} |
target_gh_token |
Optional | GitHub Token for the repo to be synced to. Can be passed in using $\{{ secrets.GITHUB_TOKEN }} Default: ${{ github.token }} |
github_token |
Optional | GitHub Token for the repo to be synced from. Can be passed in using $\{{ secrets.GITHUB_TOKEN }} |
source_repo_path |
Required | Repository path of the template |
upstream_branch |
Optional | The target branch |
source_repo_ssh_private_key |
Optional | [optional] private ssh key for the source repository. E.g. useful if using a private template repository. |
pr_branch_name_prefix |
Optional | [optional] the prefix of branches created by this action Default: chore/template_sync |
pr_title |
Optional | [optional] the title of PRs opened by this action Default: upstream merge template repository |
pr_body |
Optional | [optional] the body of PRs opened by this action |
pr_labels |
Optional | [optional] comma separated list of pull request labels Default: template_sync |
pr_reviewers |
Optional | [optional] comma separated list of pull request reviewers |
pr_commit_msg |
Optional | [optional] the commit message of the template merge Default: chore(template): merge template changes :up: |
hostname |
Optional | [optional] the hostname of the GitHub repository Default: github.com |
is_dry_run |
Optional | [optional] set to true if you do not want to push the changes and do not want to create a PR |
is_allow_hooks |
Optional | [optional] set to true if you want to allow hooks. Use this functionality with caution! Default: false |
hooks |
Optional | [optional] define the hooks as yaml string input |
is_force_push_pr |
Optional | [optional] set to true if you want to force push and update the PR Default: false |
is_pr_cleanup |
Optional | [optional] set to true if you want to clean up older PRs targeting the same branch. Default: false |
is_keep_branch_on_pr_cleanup |
Optional | [optional] set to true if you want to keep the branch when the PR is cleaned up Default: false |
is_not_source_github |
Optional | [optional] set to true if the source repository is not a github related repository. Useful e.g. if the source is GitLab Default: false |
is_force_deletion |
Optional | [optional] set to true if you want to force delete files which are deleted within the source repository even if they contain changes Default: false |
is_git_lfs |
Optional | [optional] set to true if you want to enable git lfs Default: false |
git_user_name |
Optional | [optional] set the committer git user.name for the merge commit |
git_user_email |
Optional | [optional] set the committer git user.email for the merge commit |
git_remote_pull_params |
Optional | [optional] set the pull parameters for the remote repository |
gpg_private_key |
Optional | [optional] set the gpg private key if you want to sign your commits |
gpg_passphrase |
Optional | [optional] set if your private gpg key has a password |
steps |
Optional | [optional] set the steps to execute within the action |
template_sync_ignore_file_path |
Optional | [optional] set the path to the ignore file Default: .templatesyncignore |
is_with_tags |
Optional | [optional] set to true if tags should be synced Default: false |
| Name | Description |
|---|---|
pr_branch |
The name of the PR branch |
template_git_hash |
The git hash of the template source repository |
name: "actions-template-sync"
description: "Synchronises changes of the template repository"
author: "AndreasAugustin"
branding:
icon: cloud
color: green
inputs:
source_gh_token:
description: 'GitHub Token for the repo to be synced from. Can be passed in using $\{{ secrets.GITHUB_TOKEN }}'
required: false
default: ${{ github.token }}
target_gh_token:
description: 'GitHub Token for the repo to be synced to. Can be passed in using $\{{ secrets.GITHUB_TOKEN }}'
required: false
default: ${{ github.token }}
github_token:
deprecationMessage: 'please use source_gh_token instead to have a declarative name'
description: 'GitHub Token for the repo to be synced from. Can be passed in using $\{{ secrets.GITHUB_TOKEN }}'
required: false
source_repo_path:
description: "Repository path of the template"
required: true
upstream_branch:
description: "The target branch"
source_repo_ssh_private_key:
description: "[optional] private ssh key for the source repository. E.q. useful if using a private template repository."
pr_branch_name_prefix:
description: "[optional] the prefix of branches created by this action"
default: "chore/template_sync"
pr_title:
description: "[optional] the title of PRs opened by this action"
default: "upstream merge template repository"
pr_body:
description: "[optional] the body of PRs opened by this action"
pr_labels:
description: "[optional] comma separated list of pull request labels"
default: "template_sync"
pr_reviewers:
description: "[optional] comma separated list of pull request reviewers"
pr_commit_msg:
description: "[optional] the commit message of the template merge"
default: "chore(template): merge template changes :up:"
hostname:
description: "[optional] the hostname of the GitHub repository"
default: "github.com"
is_dry_run:
description: "[optional] set to true if you do not want to push the changes and not want to create a PR"
is_allow_hooks:
description: "[optional] set to true if you want to allow hooks. Use this functionality with caution!"
default: "false"
hooks:
description: "[optional] define the hooks as yaml string input"
is_force_push_pr:
description: "[optional] set to true if you want to force push and pr update"
default: "false"
is_pr_cleanup:
description: "[optional] set to true if you want to cleanup older PRs targeting the same branch."
default: "false"
is_keep_branch_on_pr_cleanup:
description: "[optional] set to true if you want to keep the branch when pr is cleanup"
default: "false"
is_not_source_github:
description: "[optional] set to true if the source repository is not a github related repository. Useful e.q. if the source is GitLab"
default: "false"
is_force_deletion:
description: "[optional] set to true if you want to force delete files which are deleted within the source repository even if they contain changes"
default: "false"
is_git_lfs:
description: "[optional] set to true if you want to enable git lfs"
default: "false"
git_user_name:
description: "[optional] set the committer git user.name for the merge commit"
git_user_email:
description: "[optional] set the committer git user.email for the merge commit"
git_remote_pull_params:
description: "[optional] set the pull parameters for the remote repository"
gpg_private_key:
description: "[optional] set the gpg private key if you want to sign your commits"
gpg_passphrase:
description: "[optional] set if your private gpg key has a password"
steps:
description: "[optional] set the steps to execute within the action"
template_sync_ignore_file_path:
description: "[optional] set the path to the ignore file"
default: ".templatesyncignore"
is_with_tags:
description: "[optional] set to true if tags should be synced"
default: "false"
outputs:
pr_branch:
description: "The name of the PR branch"
value: ${{ steps.sync.outputs.pr_branch }}
template_git_hash:
description: "The git hash of the template source repository"
value: ${{ steps.sync.outputs.template_git_hash }}
runs:
using: "composite"
# image: "src/Dockerfile"
steps:
- name: github sync
run: ${{github.action_path}}/src/entrypoint.sh
# working-directory: src/
shell: bash
id: sync
env:
SOURCE_GH_TOKEN: ${{ inputs.source_gh_token}}
TARGET_GH_TOKEN: ${{ inputs.target_gh_token }}
# GITHUB_TOKEN is deprecated and will be removed soon
GITHUB_TOKEN: ${{ inputs.github_token }}
#
SOURCE_REPO_PATH: ${{ inputs.source_repo_path }}
UPSTREAM_BRANCH: ${{ inputs.upstream_branch }}
SSH_PRIVATE_KEY_SRC: ${{ inputs.source_repo_ssh_private_key }}
PR_BRANCH_NAME_PREFIX: ${{ inputs.pr_branch_name_prefix }}
PR_TITLE: ${{ inputs.pr_title }}
PR_BODY: ${{ inputs.pr_body }}
PR_LABELS: ${{ inputs.pr_labels }}
PR_REVIEWERS: ${{ inputs.pr_reviewers }}
PR_COMMIT_MSG: ${{ inputs.pr_commit_msg }}
HOSTNAME: ${{ inputs.hostname }}
IS_DRY_RUN: ${{ inputs.is_dry_run }}
IS_ALLOW_HOOKS: ${{ inputs.is_allow_hooks }}
HOOKS: ${{ inputs.hooks }}
IS_FORCE_PUSH_PR: ${{ inputs.is_force_push_pr }}
IS_GIT_LFS: ${{ inputs.is_git_lfs }}
IS_PR_CLEANUP: ${{ inputs.is_pr_cleanup}}
IS_KEEP_BRANCH_ON_PR_CLEANUP: ${{ inputs.is_keep_branch_on_pr_cleanup }}
IS_NOT_SOURCE_GITHUB: ${{ inputs.is_not_source_github }}
IS_FORCE_DELETION: ${{ inputs.is_force_deletion }}
GIT_USER_NAME: ${{ inputs.git_user_name }}
GIT_USER_EMAIL: ${{ inputs.git_user_email }}
GIT_REMOTE_PULL_PARAMS: ${{ inputs.git_remote_pull_params }}
GPG_PRIVATE_KEY: ${{ inputs.gpg_private_key }}
GPG_PASSPHRASE: ${{ inputs.gpg_passphrase }}
STEPS: ${{ inputs.steps }}
TEMPLATE_SYNC_IGNORE_FILE_PATH: ${{ inputs.template_sync_ignore_file_path }}
IS_WITH_TAGS: ${{ inputs.is_with_tags }}
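A minimal scheduled sketch; the cron expression, the `@v2` ref, and the template repository slug are assumptions:
on:
  schedule:
    - cron: '0 6 * * 1'   # schedule assumed
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: AndreasAugustin/actions-template-sync@v2   # ref assumed
        with:
          source_repo_path: my-org/my-template           # slug assumed
          upstream_branch: main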
Action ID: marketplace/wei/github-sync
Author: Wei He <github@weispot.com>
Publisher: wei
Repository: github.com/wei/github-sync
⤵️ Sync current repository with remote
| Name | Required | Description |
|---|---|---|
source_repo |
Required | GitHub public repo slug or full https clone url (with access_token if needed) |
source_branch |
Required | Branch name to sync from |
destination_branch |
Required | Branch name to sync to in this repo |
github_token |
Required | GitHub token secret |
sync_tags |
Optional | Should tags also be synced |
name: GitHub Repo Sync
author: Wei He <github@weispot.com>
description: ⤵️ Sync current repository with remote
branding:
icon: 'git-branch'
color: 'gray-dark'
inputs:
source_repo:
description: GitHub public repo slug or full https clone url (with access_token if needed)
required: true
source_branch:
description: Branch name to sync from
required: true
destination_branch:
description: Branch name to sync to in this repo
required: true
github_token:
description: GitHub token secret
required: true
sync_tags:
description: Should tags also be synced
required: false
runs:
using: 'docker'
image: docker://ghcr.io/repo-sync/github-sync:v2.3.0
env:
GITHUB_TOKEN: ${{ inputs.github_token }}
SYNC_TAGS: ${{ inputs.sync_tags }}
args:
- ${{ inputs.source_repo }}
- ${{ inputs.source_branch }}:${{ inputs.destination_branch }}
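A minimal sketch; the `@v3` ref, the source repository slug, and the branch names are assumptions:
steps:
  - uses: wei/github-sync@v3             # ref assumed
    with:
      source_repo: octocat/Hello-World   # slug assumed
      source_branch: main                # branch assumed
      destination_branch: main           # branch assumed
      github_token: ${{ secrets.GITHUB_TOKEN }}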
Action ID: marketplace/sobolevn/sourcehut_issue_mirror
Author: Unknown
Publisher: sobolevn
Repository: github.com/sobolevn/sourcehut_issue_mirror
Create a sr.ht issue
| Name | Required | Description |
|---|---|---|
title |
Required | Title of the issue Default: Title not provided |
body |
Required | Body of the issue Default: No issue body |
submitter |
Required | Username of issue submitter |
tracker-owner |
Required | Account name of the tracker owner. *Must be prefixed with "~"* |
tracker-name |
Required | Name of the tracker |
oauth-token |
Required | OAuth Token for sr.ht |
name: 'SourceHut Issue Maker'
description: 'Create a sr.ht issue'
inputs:
title:
description: 'Title of the issue'
required: true
default: 'Title not provided'
body:
description: 'Body of the issue'
required: true
default: 'No issue body'
submitter:
description: 'Username of issue submitter'
required: true
tracker-owner:
description: 'Account name of the tracker owner. *Must be prefixed with "~"*'
required: true
tracker-name:
description: 'Name of the tracker'
required: true
oauth-token:
description: 'OAuth Token for sr.ht'
required: true
runs:
using: 'node12'
main: 'index.js'
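A minimal sketch mirroring a newly opened issue; the `@master` ref, the tracker values, and the secret name are placeholders:
steps:
  - uses: sobolevn/sourcehut_issue_mirror@master   # ref assumed
    with:
      title: ${{ github.event.issue.title }}
      body: ${{ github.event.issue.body }}
      submitter: ${{ github.event.issue.user.login }}
      tracker-owner: '~example'                      # placeholder
      tracker-name: example-tracker                  # placeholder
      oauth-token: ${{ secrets.SRHT_OAUTH_TOKEN }}   # secret name assumed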
Action ID: marketplace/azure/mysql
Author: Unknown
Publisher: azure
Repository: github.com/azure/mysql
Deploy to Azure MySQL database using SQL script files
| Name | Required | Description |
|---|---|---|
server-name |
Required | Server name of Azure DB for Mysql. Example: fabrikam.mysql.database.azure.com. When you connect using Mysql Workbench, this is the same value that is used for Hostname in Parameters |
connection-string |
Optional | The connection string, including authentication information, for the Azure MySQL Server. (deprecated) |
username |
Optional | Azure MySQL Server username for login |
password |
Optional | Azure MySQL Server password for login |
database |
Optional | Azure MySQL Server database (optional) to connect to. No database will be used automatically. |
sql-file |
Required | Path to SQL script file to deploy |
arguments |
Optional | Additional options supported by mysql simple SQL shell. These options will be applied when executing the given file on the Azure DB for Mysql. |
name: 'Azure MYSQL Deploy'
description: 'Deploy to Azure MySQL database using SQL script files'
inputs:
server-name:
description: 'Server name of Azure DB for Mysql. Example: fabrikam.mysql.database.azure.com. When you connect using Mysql Workbench, this is the same value that is used for Hostname in Parameters'
required: true
connection-string:
description: 'The connection string, including authentication information, for the Azure MySQL Server. (deprecated)'
required: false
username:
description: 'Azure MySQL Server username for login'
required: false
password:
description: 'Azure MySQL Server password for login'
required: false
database:
description: 'Azure MySQL Server database (optional) to connect to. No database will be used automatically.'
required: false
sql-file:
description: 'Path to SQL script file to deploy'
required: true
arguments:
description: 'Additional options supported by mysql simple SQL shell. These options will be applied when executing the given file on the Azure DB for Mysql.'
required: false
runs:
using: 'node12'
main: 'lib/main.js'
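A minimal sketch; the `@v1` ref, the server name, the secret names, and the script path are placeholders:
steps:
  - uses: azure/mysql@v1   # ref assumed
    with:
      server-name: fabrikam.mysql.database.azure.com   # placeholder
      username: ${{ secrets.MYSQL_USERNAME }}          # secret name assumed
      password: ${{ secrets.MYSQL_PASSWORD }}          # secret name assumed
      sql-file: ./sql/schema.sql                       # path assumed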
Action ID: marketplace/julia-actions/julia-report-ci-results
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-report-ci-results
Report Julia CI results
| Name | Required | Description |
|---|---|---|
results-path |
Required | |
lint-results |
Optional |
name: 'Report Julia CI results'
description: 'Report Julia CI results'
inputs:
results-path:
type: string
required: true
lint-results:
required: false
runs:
using: "composite"
steps:
- name: Compute Manifest hash
id: project-hash
shell: pwsh
run: |
$ourHash = Get-FileHash -LiteralPath "$env:GITHUB_ACTION_PATH\Manifest.toml"
"MANIFEST_HASH=$($ourHash.Hash)" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
- name: Check Julia version
shell: bash
id: julia-version
run: |
echo "JULIA_VERSION=$(julia -v)" >> $GITHUB_OUTPUT
- uses: actions/cache@v4
id: cache-project
with:
path: ${{ runner.tool_cache }}/julia-run-testitems-depot
key: julia-report-ci-results-cache-${{ runner.os }}-${{ steps.julia-version.outputs.JULIA_VERSION }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Install and precompile
if: steps.cache-project.outputs.cache-hit != 'true'
run: julia -e 'import Pkg; Pkg.instantiate()'
shell: bash
env:
JULIA_PROJECT: ${{ github.action_path }}
JULIA_DEPOT_PATH: ${{ runner.tool_cache }}/julia-run-testitems-depot
- uses: actions/cache/save@v4
if: steps.cache-project.outputs.cache-hit != 'true'
with:
path: ${{ runner.tool_cache }}/julia-run-testitems-depot
key: julia-report-ci-results-cache-${{ runner.os }}-${{ steps.julia-version.outputs.JULIA_VERSION }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Run test items
run: julia --project=${{ github.action_path }} ${{ github.action_path }}/main.jl
shell: pwsh
env:
JULIA_DEPOT_PATH: ${{ runner.tool_cache }}/julia-run-testitems-depot
RESULTS_PATH: ${{ inputs.results-path }}
LINT_RESULTS: ${{ inputs.lint-results }}
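A minimal sketch, assuming Julia is already on the PATH (for example via julia-actions/setup-julia); the refs and the results path are assumptions:
steps:
  - uses: julia-actions/setup-julia@v2                 # ref assumed
  - uses: julia-actions/julia-report-ci-results@main   # ref assumed
    with:
      results-path: ./test-results                     # path assumed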
Action ID: marketplace/vsoch/action-updater
Author: Unknown
Publisher: vsoch
Repository: github.com/vsoch/action-updater
Check for updates to your actions
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub token |
updaters |
Optional | Choose named updaters to run (comma separated value, no spaces) |
path |
Optional | path to file or directory to check Default: .github/workflows |
settings_file |
Optional | custom settings file for updater |
args |
Optional | additional args to provide to 'detect' or 'update' commands |
version |
Optional | release of updater to use |
allow_fail |
Optional | allow a failure (only relevant if pull_request is false) |
name: "action-updater"
description: "Check for updates to your actions"
branding:
icon: 'activity'
color: 'green'
inputs:
token:
description: GitHub token
required: false
updaters:
description: Choose named updaters to run (comma separated value, no spaces)
required: false
path:
description: path to file or directory to check
default: .github/workflows
settings_file:
description: custom settings file for updater
required: false
args:
description: additional args to provide to 'detect' or 'update' commands
required: false
version:
description: release of updater to use
required: false
allow_fail:
description: allow a failure (only relevant if pull_request is false)
default: false
runs:
using: "composite"
steps:
- name: Install Action Updater
env:
version: ${{ inputs.version }}
run: |
if [[ "${version}" == "" ]]; then
pip install git+https://github.com/vsoch/action-updater.git@main
else
pip install action-updater@${version}
fi
shell: bash
- name: Detect Action Updates
env:
path: ${{ inputs.path }}
settings_file: ${{ inputs.settings_file }}
args: ${{ inputs.args }}
updaters: ${{ inputs.updaters }}
GITHUB_TOKEN: ${{ inputs.token }}
run: |
# If pwd is provided, ensure we get the entire path
if [[ "${path}" == "." ]]; then
path=$(pwd)
echo "path=${path}" >> ${GITHUB_ENV}
fi
cmd="action-updater"
if [[ "${settings_file}" != "" ]]; then
cmd="$cmd --settings-file ${settings_file}"
fi
cmd="${cmd} detect"
if [[ "${updaters}" != "" ]]; then
cmd="${cmd} --updaters ${updaters}"
fi
cmd="${cmd} ${path} ${args}"
printf "${cmd}\n"
$cmd && retval=0 || retval=1
echo "retval=${retval}" >> $GITHUB_ENV
shell: bash
- name: Exit on failure (updates)
env:
allow_fail: ${{ inputs.allow_fail }}
retval: ${{ env.retval }}
run: |
if [[ "${retval}" != "0" ]] && [[ "${allow_fail}" == "false" ]]; then
printf "Detect found changes, and allow_fail is false."
exit 1
elif [[ "${retval}" != "0" ]] && [[ "${allow_fail}" == "true" ]]; then
printf "Detect found changes, and allow_fail is true."
exit 0
fi
printf "Return value is ${retval}, no changes needed!\n"
shell: bash
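A minimal sketch; the `@main` ref is an assumption:
steps:
  - uses: actions/checkout@v4
  - uses: vsoch/action-updater@main   # ref assumed
    with:
      token: ${{ secrets.GITHUB_TOKEN }}
      path: .github/workflows
      allow_fail: true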
Action ID: marketplace/aws-actions/amazon-ecs-deploy-express-service
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/amazon-ecs-deploy-express-service
Creates or updates an Amazon ECS Express Mode service
| Name | Required | Description |
|---|---|---|
service-name |
Required | The name of the ECS Express service. Used for both creating new services and updating existing ones. |
image |
Required | The container image URI to deploy (e.g., 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest) |
execution-role-arn |
Required | The ARN of the task execution role that grants the ECS agent permission to pull container images and publish logs |
infrastructure-role-arn |
Required | The ARN of the infrastructure role that grants ECS permission to create and manage AWS resources (ALB, target groups, etc.) |
cluster |
Optional | The name of the ECS cluster. Will default to the 'default' cluster. Default: default |
container-port |
Optional | The port number on the container that receives traffic (containerPort in primaryContainer). If not specified, Express Mode will use port 80. |
environment-variables |
Optional | Environment variables to set in the container (environment in primaryContainer). Provide as JSON array: [{"name":"KEY","value":"VALUE"}] |
secrets |
Optional | Secrets to inject into the container (secrets in primaryContainer). Provide as JSON array: [{"name":"KEY","valueFrom":"arn:aws:secretsmanager:..."}] |
command |
Optional | Override the default container command (command in primaryContainer). Provide as JSON array: ["node","server.js"] |
log-group |
Optional | CloudWatch Logs log group name for container logs (logGroup in awsLogsConfiguration). If not specified, Express Mode creates a log group automatically. |
log-stream-prefix |
Optional | CloudWatch Logs stream prefix for container logs (logStreamPrefix in awsLogsConfiguration). If not specified, Express Mode uses a default prefix. |
repository-credentials |
Optional | ARN of the secret containing credentials for private container registry (credentialsParameter in repositoryCredentials). Required for private registries outside ECR. |
cpu |
Optional | The number of CPU units to allocate (256, 512, 1024, 2048, 4096, 8192, 16384). If not specified, Express Mode defaults to 256 (.25 vCPU). |
memory |
Optional | The amount of memory in MiB to allocate (512, 1024, 2048, 4096, 8192, 16384, 30720, 61440, 122880). If not specified, Express Mode defaults to 512 MiB. |
task-role-arn |
Optional | The ARN of the IAM role that the container can assume to make AWS API calls (taskRoleArn) |
subnets |
Optional | Comma-separated list of subnet IDs for the service (subnets in networkConfiguration). If not specified, Express Mode uses the default VPC. |
security-groups |
Optional | Comma-separated list of security group IDs for the service (securityGroups in networkConfiguration). |
health-check-path |
Optional | The path for ALB health checks (healthCheckPath). If not specified, Express Mode defaults to /ping. |
min-task-count |
Optional | Minimum number of tasks for auto-scaling (minTaskCount in scalingTarget). Must be less than or equal to max-task-count. |
max-task-count |
Optional | Maximum number of tasks for auto-scaling (maxTaskCount in scalingTarget). Must be greater than or equal to min-task-count. |
auto-scaling-metric |
Optional | The metric to use for auto-scaling (autoScalingMetric in scalingTarget): AVERAGE_CPU, AVERAGE_MEMORY, or REQUEST_COUNT_PER_TARGET |
auto-scaling-target-value |
Optional | The target value for the auto-scaling metric (autoScalingTargetValue in scalingTarget). For example, 60 for 60% CPU utilization. |
| Name | Description |
|---|---|
service-arn |
The ARN of the deployed Express service |
endpoint |
The endpoint URL of the service (from the Application Load Balancer) |
name: 'Amazon ECS "Deploy Express Service" Action for GitHub Actions'
description: 'Creates or updates an Amazon ECS Express Mode service'
branding:
icon: 'cloud'
color: 'orange'
inputs:
# Required inputs
service-name:
description: 'The name of the ECS Express service. Used for both creating new services and updating existing ones.'
required: true
image:
description: 'The container image URI to deploy (e.g., 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest)'
required: true
execution-role-arn:
description: 'The ARN of the task execution role that grants the ECS agent permission to pull container images and publish logs'
required: true
infrastructure-role-arn:
description: 'The ARN of the infrastructure role that grants ECS permission to create and manage AWS resources (ALB, target groups, etc.)'
required: true
# Service identification
cluster:
description: "The name of the ECS cluster. Will default to the 'default' cluster."
required: false
default: 'default'
# Primary container configuration
container-port:
description: 'The port number on the container that receives traffic (containerPort in primaryContainer). If not specified, Express Mode will use port 80.'
required: false
environment-variables:
description: 'Environment variables to set in the container (environment in primaryContainer). Provide as JSON array: [{"name":"KEY","value":"VALUE"}]'
required: false
secrets:
description: 'Secrets to inject into the container (secrets in primaryContainer). Provide as JSON array: [{"name":"KEY","valueFrom":"arn:aws:secretsmanager:..."}]'
required: false
command:
description: 'Override the default container command (command in primaryContainer). Provide as JSON array: ["node","server.js"]'
required: false
log-group:
description: 'CloudWatch Logs log group name for container logs (logGroup in awsLogsConfiguration). If not specified, Express Mode creates a log group automatically.'
required: false
log-stream-prefix:
description: 'CloudWatch Logs stream prefix for container logs (logStreamPrefix in awsLogsConfiguration). If not specified, Express Mode uses a default prefix.'
required: false
repository-credentials:
description: 'ARN of the secret containing credentials for private container registry (credentialsParameter in repositoryCredentials). Required for private registries outside ECR.'
required: false
# Resource configuration
cpu:
description: 'The number of CPU units to allocate (256, 512, 1024, 2048, 4096, 8192, 16384). If not specified, Express Mode defaults to 256 (.25 vCPU).'
required: false
memory:
description: 'The amount of memory in MiB to allocate (512, 1024, 2048, 4096, 8192, 16384, 30720, 61440, 122880). If not specified, Express Mode defaults to 512 MiB.'
required: false
task-role-arn:
description: 'The ARN of the IAM role that the container can assume to make AWS API calls (taskRoleArn)'
required: false
# Network configuration
subnets:
description: 'Comma-separated list of subnet IDs for the service (subnets in networkConfiguration). If not specified, Express Mode uses the default VPC.'
required: false
security-groups:
description: 'Comma-separated list of security group IDs for the service (securityGroups in networkConfiguration).'
required: false
# Health check configuration
health-check-path:
description: 'The path for ALB health checks (healthCheckPath). If not specified, Express Mode defaults to /ping.'
required: false
# Auto-scaling configuration
min-task-count:
description: 'Minimum number of tasks for auto-scaling (minTaskCount in scalingTarget). Must be less than or equal to max-task-count.'
required: false
max-task-count:
description: 'Maximum number of tasks for auto-scaling (maxTaskCount in scalingTarget). Must be greater than or equal to min-task-count.'
required: false
auto-scaling-metric:
description: 'The metric to use for auto-scaling (autoScalingMetric in scalingTarget): AVERAGE_CPU, AVERAGE_MEMORY, or REQUEST_COUNT_PER_TARGET'
required: false
auto-scaling-target-value:
description: 'The target value for the auto-scaling metric (autoScalingTargetValue in scalingTarget). For example, 60 for 60% CPU utilization.'
required: false
outputs:
service-arn:
description: 'The ARN of the deployed Express service'
endpoint:
description: 'The endpoint URL of the service (from the Application Load Balancer)'
runs:
using: 'node20'
main: 'dist/index.js'
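Example workflow usage, as a minimal sketch built from the inputs above; the `uses:` reference is a placeholder (the action's repository path appears earlier in this entry), and the ARNs, image URI, and service name are illustrative assumptions.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Express Mode service
        id: deploy
        # Placeholder reference; substitute the action's actual repository path and a pinned version.
        uses: <owner>/<repo>@<version>
        with:
          service-name: my-express-service
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
          execution-role-arn: arn:aws:iam::123456789012:role/ecsTaskExecutionRole
          infrastructure-role-arn: arn:aws:iam::123456789012:role/ecsInfrastructureRole
          container-port: '3000'
          health-check-path: /healthz
      - name: Show service endpoint
        run: echo "Deployed to ${{ steps.deploy.outputs.endpoint }}"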
Action ID: marketplace/appleboy/whisper-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/whisper-action
High-performance inference of OpenAI Whisper automatic speech recognition (ASR) model
| Name | Required | Description |
|---|---|---|
model |
Optional | public whisper model. (available: small, medium and large) Default: small |
audio_path |
Optional | Audio Path. |
output_folder |
Optional | output folder. |
output_format |
Optional | output format, support txt, srt, csv. Default: txt |
output_filename |
Optional | output filename. |
debug |
Optional | enable debug mode. |
print_progress |
Optional | print progress. Default: true |
print_segment |
Optional | print segment. |
youtube_url |
Optional | youtube url |
translate |
Optional | translate from source language to english |
cut_silences |
Optional | cut silences |
prompt |
Optional | initial prompt |
name: 'Speech to Text OpenAI Whisper'
description: 'High-performance inference of OpenAI Whisper automatic speech recognition (ASR) model'
author: 'Bo-Yi Wu'
inputs:
model:
description: 'public whisper model. (available: small, medium and large)'
default: 'small'
audio_path:
description: 'Audio Path.'
output_folder:
description: 'output folder.'
output_format:
description: 'output format, support txt, srt, csv.'
default: 'txt'
output_filename:
description: 'output filename.'
debug:
description: 'enable debug mode.'
print_progress:
description: 'print progress.'
default: 'true'
print_segment:
description: 'print segment.'
youtube_url:
description: 'youtube url'
translate:
description: 'translate from source language to english'
cut_silences:
description: 'cut silences'
prompt:
description: 'initial prompt'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'terminal'
color: 'gray-dark'
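Example workflow usage, sketched from the inputs above; the ref and the audio path are illustrative assumptions.
jobs:
  transcribe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Transcribe audio with Whisper
        uses: appleboy/whisper-action@master   # ref assumed; pin to a released tag
        with:
          model: medium
          audio_path: ./samples/speech.wav     # illustrative path
          output_folder: ./output
          output_format: srt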
Action ID: marketplace/azure/spring-apps-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/spring-apps-deploy
Deploy applications to Azure Spring Apps and manage deployments.
| Name | Required | Description |
|---|---|---|
azure-subscription |
Required | Select the Azure Resource Manager subscription for the deployment. |
action |
Required | Action to be performed on Azure Spring Apps. Default: deploy |
service-name |
Required | Select the Azure Spring Apps service to which to deploy. |
app-name |
Required | Select the Azure Spring Apps app to deploy. |
use-staging-deployment |
Required | Automatically select the deployment that's set as Staging at the time the task runs. Default: True |
create-new-deployment |
Optional | Whether to target the deployment that's set as Staging at the time of execution. If unchecked, the 'Deployment Name' setting must be set. |
deployment-name |
Optional | The deployment to which this task will apply. Lowercase letters, - and numbers only; must start with a letter. |
package |
Optional | File path to the package or a folder containing the Spring Apps app contents. Default: ${{ github.workspace }}/**/*.jar |
target-module |
Optional | Child module to be deployed, required for multiple jar packages built from source code. |
cpu |
Optional | The CPU resource quantity. It should be 500m or number of CPU cores. It is effective only when creating new deployment. Default: 1 |
memory |
Optional | The memory resource quantity. It should be 512Mi or #Gi, e.g., 1Gi, 3Gi. It is effective only when creating new deployment. Default: 1Gi |
environment-variables |
Optional | Edit the app's environment variables. |
jvm-options |
Optional | Edit the app's JVM options. A String containing JVM Options. Example: `-Xms1024m -Xmx2048m` |
runtime-version |
Optional | The runtime on which the app will run. |
dotnetcore-mainentry-path |
Optional | The path to the .NET executable relative to zip root. |
version |
Optional | The runtime on which the app will run. |
build-name |
Optional | (Enterprise Tier Only) The build name. |
builder |
Optional | (Enterprise Tier Only) Build service builder used to build the executable. |
build-cpu |
Optional | (Enterprise Tier Only) CPU resource quantity for build container. Should be 500m or number of CPU cores. Default: 1 |
build-memory |
Optional | (Enterprise Tier Only) Memory resource quantity for build container. Should be 512Mi or #Gi, e.g., 1Gi, 3Gi. Default: 2Gi. |
build-env |
Optional | (Enterprise Tier Only) Space-separated environment variables for the build process in 'key[=value]' format. |
config-file-patterns |
Optional | (Enterprise Tier Only) Config file patterns separated with ',' to decide which patterns of Application Configuration Service will be used. Use '""' to clear existing configurations. |
container-registry |
Optional | The registry of the container image. Default: docker.io |
registry-username |
Optional | The username of the container registry. |
registry-password |
Optional | The password of the container registry. |
container-image |
Optional | The container image. |
container-command |
Optional | The command of the container. |
container-args |
Optional | The arguments of the container. |
language-framework |
Optional | The language framework of the container. |
enable-liveness-probe |
Optional | If false, will disable the liveness probe of the app instance. Allowed values: false, true. |
enable-readiness-probe |
Optional | If false, will disable the readiness probe of the app instance. Allowed values: false, true. |
enable-startup-probe |
Optional | If false, will disable the startup probe of the app instance. Allowed values: false, true. |
termination-grace-period-seconds |
Optional | Optional duration in seconds the app instance needs to terminate gracefully. |
liveness-probe-config |
Optional | A JSON file path that indicates the liveness probe config |
readiness-probe-config |
Optional | A JSON file path that indicates the readiness probe config |
startup-probe-config |
Optional | A JSON file path that indicates the startup probe config |
# azure spring apps action
name: 'Azure Spring Apps'
description: 'Deploy applications to Azure Spring Apps and manage deployments.'
inputs:
azure-subscription:
description: 'Select the Azure Resource Manager subscription for the deployment.'
required: true
action:
description: 'Action to be performed on Azure Spring Apps.'
required: true
default: 'deploy'
service-name:
description: 'Select the Azure Spring Apps service to which to deploy.'
required: true
app-name:
description: 'Select the Azure Spring Apps app to deploy.'
required: true
use-staging-deployment:
description: "Automatically select the deployment that's set as Staging at the time the task runs."
required: true
default: true
create-new-deployment:
description: "Whether to target the deployment that's set as Staging at the time of execution. If unchecked, the 'Deployment Name' setting must be set."
required: false
default: false
deployment-name:
description: 'The deployment to which this task will apply. Lowercase letters, - and numbers only; must start with a letter.'
required: false
package:
description: "File path to the package or a folder containing the Spring Apps app contents."
required: false
default: '${{ github.workspace }}/**/*.jar'
target-module:
description: "Child module to be deployed, required for multiple jar packages built from source code."
required: false
cpu:
description: "The CPU resource quantity. It should be 500m or number of CPU cores. It is effective only when creating new deployment."
required: false
default: '1'
memory:
description: "The memory resource quantity. It should be 512Mi or #Gi, e.g., 1Gi, 3Gi. It is effective only when creating new deployment."
required: false
default: '1Gi'
environment-variables:
description: "Edit the app's environment variables."
required: false
jvm-options:
description: "Edit the app's JVM options. A String containing JVM Options. Example: `-Xms1024m -Xmx2048m`"
required: false
runtime-version:
description: 'The runtime on which the app will run.'
required: false
dotnetcore-mainentry-path:
description: 'The path to the .NET executable relative to zip root.'
required: false
version:
description: 'The runtime on which the app will run.'
required: false
build-name:
description: '(Enterprise Tier Only) The build name.'
required: false
builder:
description: '(Enterprise Tier Only) Build service builder used to build the executable.'
required: false
build-cpu:
description: '(Enterprise Tier Only) CPU resource quantity for build container. Should be 500m or number of CPU cores. Default: 1'
required: false
build-memory:
description: '(Enterprise Tier Only) Memory resource quantity for build container. Should be 512Mi or #Gi, e.g., 1Gi, 3Gi. Default: 2Gi.'
required: false
build-env:
description: "(Enterprise Tier Only) Space-separated environment variables for the build process in 'key[=value]' format."
required: false
config-file-patterns:
description: "(Enterprise Tier Only) Config file patterns separated with ',' to decide which patterns of Application Configuration Service will be used. Use '\"\"' to clear existing configurations."
required: false
container-registry:
description: "The registry of the container image. Default: docker.io."
required: false
default: "docker.io"
registry-username:
description: "The username of the container registry."
required: false
registry-password:
description: "The password of the container registry."
required: false
container-image:
description: "The container image."
required: false
container-command:
description: "The command of the container."
required: false
container-args:
description: "The arguments of the container."
required: false
language-framework:
description: "The language framework of the container."
required: false
enable-liveness-probe:
description: "If false, will disable the liveness probe of the app instance. Allowed values: false, true."
required: false
enable-readiness-probe:
description: "If false, will disable the readiness probe of the app instance. Allowed values: false, true."
required: false
enable-startup-probe:
description: "If false, will disable the startup probe of the app instance. Allowed values: false, true."
required: false
termination-grace-period-seconds:
description: "Optional duration in seconds the app instance needs to terminate gracefully."
required: false
liveness-probe-config:
description: "A json file path indicates the liveness probe config"
required: false
readiness-probe-config:
description: "A json file path indicates the readiness probe config"
required: false
startup-probe-config:
description: "A json file path indicates the startup probe config"
required: false
branding:
icon: 'icon.svg'
runs:
using: 'node12'
main: 'lib/main.js'
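A minimal deployment sketch using the inputs above; the version tags, secret names, and resource names are assumptions, and the runner must already be signed in to Azure (for example with azure/login).
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v1                      # tag assumed
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Deploy to Azure Spring Apps
        uses: azure/spring-apps-deploy@v1         # tag assumed
        with:
          azure-subscription: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          action: deploy
          service-name: my-spring-service
          app-name: my-app
          use-staging-deployment: false
          package: ${{ github.workspace }}/target/my-app.jar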
Action ID: marketplace/tibdex/sync-branches
Author: Thibault Derousseaux <tibdex@gmail.com>
Publisher: tibdex
Repository: github.com/tibdex/sync-branches
Automatically merge a repository branch into other branches to keep them in sync.
| Name | Required | Description |
|---|---|---|
body_template |
Optional | Lodash template for the syncing PR's body.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
Default: Sync `<%= base %>` with `<%= head %>`. |
branches_pattern |
Optional | Branches matching this `minimatch` pattern will be synced with the branch from which this action is run. If empty, all the protected branches will be synced. |
github_token |
Optional | Token for the GitHub API. Default: ${{ github.token }} |
head_template |
Optional | Lodash template for the syncing PR's head branch.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
Default: sync-<%= base %>-with-<%= head %> |
labels_template |
Optional | Lodash template compiling to a JSON array of labels to add to the syncing PR.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
Default: [] |
title_template |
Optional | Lodash template for the syncing PR's title.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
Default: Sync `<%= base %>` with `<%= head %>` |
| Name | Description |
|---|---|
created_pull_requests |
A JSON stringified object mapping the base branch of the created pull requests to their number. |
name: Sync branches
author: Thibault Derousseaux <tibdex@gmail.com>
description: Automatically merge a repository branch into other branches to keep them in sync.
inputs:
body_template:
description: >
Lodash template for the syncing PR's body.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
default: "Sync `<%= base %>` with `<%= head %>`."
branches_pattern:
description: >
Branches matching this `minimatch` pattern will be synced with the branch from which this action is run.
If empty, all the protected branches will be synced.
default: ""
github_token:
description: Token for the GitHub API.
default: ${{ github.token }}
head_template:
description: >
Lodash template for the syncing PR's head branch.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
default: "sync-<%= base %>-with-<%= head %>"
labels_template:
description: >
Lodash template compiling to a JSON array of labels to add to the syncing PR.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
default: "[]"
title_template:
description: >
Lodash template for the syncing PR's title.
The data properties are:
- base: syncing PR's base branch
- head: branch to merge into `base`.
default: "Sync `<%= base %>` with `<%= head %>`"
outputs:
created_pull_requests:
description: A JSON stringified object mapping the base branch of the created pull requests to their number.
runs:
using: node16
main: dist/index.js
branding:
icon: git-merge
color: purple
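Example workflow usage, sketched from the inputs above; the ref and branch pattern are assumptions. Triggering on pushes to the source branch keeps the matching branches up to date.
name: Sync release branches
on:
  push:
    branches: [main]
jobs:
  sync:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: tibdex/sync-branches@v1   # tag assumed
        with:
          branches_pattern: "release/*"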
Action ID: marketplace/amirisback/android-copyable-text
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-copyable-text
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/aws-actions/codeguru-reviewer
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/codeguru-reviewer
AWS CodeGuru Reviewer Action
| Name | Required | Description |
|---|---|---|
name |
Optional | Repository name Default: ${{ github.repository }} |
source_path |
Optional | Path to Java source repository Default: . |
build_path |
Optional | Path to build artifact(s) - jar or class files in this directory will be uploaded for review |
destination_commit |
Optional | SHA of next commit to be committed to source code repository after an event Default: ${{ github.event.before || github.event.pull_request.base.sha }} |
source_commit |
Optional | SHA of the previous commit that existed in the source code repository before an event Default: ${{ github.event.after || github.event.pull_request.head.sha }} |
merge_base_commit |
Optional | SHA of a commit that's the merge base for the before and after commits in a pull or merge request Default: ${{ github.event.pull_request.merge_commit_sha }} |
source_branch |
Optional | Source branch of the event Default: ${{ github.head_ref || github.ref}} |
destination_branch |
Optional | Destination branch of the event Default: ${{ github.base_ref || github.ref }} |
kms_key_id |
Optional | AWS KMS Key ID to use for encrypting source code/build artifacts. By default, Amazon-owned encryption key is used. Supplying this value overrides it to use Customer-owned encryption key |
author |
Optional | Author/Actor who triggered an event in the source code repository Default: ${{ github.event.actor }} |
event_id |
Optional | An identifier for the event that triggered CodeGuru Reviewer Analysis, e.g. Pull or Merge request id Default: ${{ github.event.number || github.run_number }} |
event_name |
Optional | Name of the event that triggered the CI/CD workflow [supported Events: push, pull, merge_request_event, schedule, workflow_dispatch] Default: ${{ github.event_name }} |
event_state |
Optional | State of the event that triggered the CI/CD workflow [example: pull_request: "opened"] Default: ${{ github.event.action }} |
client_id |
Optional | Unique identifier referring to a specific client version Default: GithubActions@v1 |
s3_bucket |
Required | S3 Bucket which will be used for code reviews |
vendor_name |
Optional | Vendor Name (e.g. GITHUB, GITLAB) for CI/CD. Default: GITHUB |
output_format |
Optional | Expected format for the results (e.g SARIF, SAST, JENKINS) Default: SARIF |
name: 'CodeGuru Reviewer'
description: 'AWS CodeGuru Reviewer Action'
branding:
icon: 'cloud'
color: 'orange'
inputs:
name:
description: 'Repository name'
default: ${{ github.repository }}
required: false
source_path:
description: 'Path to Java source repository'
default: .
required: false
build_path:
description: 'Path to build artifact(s) - jar or class files in this directory will be uploaded for review'
required: false
destination_commit:
description: 'SHA of next commit to be committed to source code repository after an event'
default: ${{ github.event.before || github.event.pull_request.base.sha }}
required: false
source_commit:
description: 'SHA of previous commit in the source code repository existed before an event'
default: ${{ github.event.after || github.event.pull_request.head.sha }}
required: false
merge_base_commit:
description: 'SHA of a commit thats the merge base for before and after commits in a pull or merge request'
default: ${{ github.event.pull_request.merge_commit_sha }}
required: false
source_branch:
description: 'Source branch of the event'
default: ${{ github.head_ref || github.ref}}
required: false
destination_branch:
description: 'Destination branch of the event'
default: ${{ github.base_ref || github.ref }}
required: false
kms_key_id:
description: 'AWS KMS Key ID to use for encrypting source code/build artifacts. By default, Amazon-owned encryption key is used. Supplying this value overrides it to use Customer-owned encryption key'
required: false
author:
description: 'Author/Actor who triggered an event in the source code repository'
default: ${{ github.event.actor }}
required: false
event_id:
description: 'An identifier for the event that triggered CodeGuru Reviewer Analysis, e.g. Pull or Merge request id'
default: ${{ github.event.number || github.run_number }}
required: false
event_name:
description: 'Name of the event that triggered the CI/CD workflow [supported Events: push, pull, merge_request_event, schedule, workflow_dispatch]'
default: ${{ github.event_name }}
required: false
event_state:
description: 'State of the event that triggered the CI/CD workflow [example: pull_request: "opened"]'
default: ${{ github.event.action }}
required: false
client_id:
description: 'Unique identifier referring to a specific client version'
default: GithubActions@v1
required: false
s3_bucket:
description: 'S3 Bucket which will be used for code reviews'
required: true
vendor_name:
description: 'Vendor Name(e.g. GITHUB, GITLAB) for CI/CD.'
default: GITHUB
required: false
output_format:
description: 'Expected format for the results (e.g SARIF, SAST, JENKINS)'
default: SARIF
required: false
runs:
using: 'docker'
image: docker://public.ecr.aws/i6i1s7m3/codegurureviewer-actions-public:latest
args:
- --name
- ${{ inputs.name }}
- --source_path
- ${{ inputs.source_path }}
- --build_path
- ${{ inputs.build_path }}
- --destination_branch
- ${{ inputs.destination_branch }}
- --before_commit_sha
- ${{ inputs.destination_commit }}
- --source_branch
- ${{ inputs.source_branch }}
- --after_commit_sha
- ${{ inputs.source_commit }}
- --kms_key_id
- ${{ inputs.kms_key_id }}
- --agent
- ${{ inputs.author }}
- --merge_commit_sha
- ${{ inputs.merge_base_commit }}
- --event_id
- ${{ inputs.event_id }}
- --event_name
- ${{ inputs.event_name }}
- --event_state
- ${{ inputs.event_state }}
- --client_id
- ${{ inputs.client_id }}
- --s3_bucket
- ${{ inputs.s3_bucket }}
- --vendor_name
- ${{ inputs.vendor_name }}
- --output_format
- ${{ inputs.output_format }}
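Example workflow usage based on the inputs above; the version tags, IAM role, region, and bucket name are assumptions. AWS credentials must be configured first, and a full clone is needed so both commits of the diff are available.
jobs:
  codeguru-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0                                   # full history so both commits exist locally
      - uses: aws-actions/configure-aws-credentials@v4     # tag assumed
        with:
          aws-region: us-east-1
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
      - name: Run CodeGuru Reviewer
        uses: aws-actions/codeguru-reviewer@v1.1           # tag assumed
        with:
          s3_bucket: codeguru-reviewer-my-bucket
          build_path: target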
Action ID: marketplace/appleboy/database-backup-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/database-backup-action
Docker image to periodically back up your database to AWS S3
| Name | Required | Description |
|---|---|---|
database_driver |
Optional | support `postgres`, `mysql` or `mongo`. default is `postgres` Default: postgres |
database_username |
Optional | database username |
database_password |
Optional | database password |
database_name |
Optional | database name |
database_host |
Optional | database host |
database_opts |
Optional | see the `pg_dump`, `mysqldump` or `mongodump` command |
storage_driver |
Optional | support `s3` or `disk`. default is `s3` Default: s3 |
access_key_id |
Optional | minio or aws s3 access key id |
secret_access_key |
Optional | minio or aws s3 secret access key |
storage_endpoint |
Optional | s3 endpoint. default is `s3.amazonaws.com` Default: s3.amazonaws.com |
storage_bucket |
Optional | s3 bucket name |
storage_region |
Optional | s3 region. default is `ap-northeast-1` Default: ap-northeast-1 |
storage_path |
Optional | backup folder path in the bucket. default is `backup` and all dump files will be saved in the `bucket/backup` directory Default: backup |
storage_ssl |
Optional | default is `false` Default: false |
storage_insecure_skip_verify |
Optional | default is `false` Default: false |
name: 'Online Backup Database'
description: 'Docker image to periodically backup a your database to AWS S3'
author: 'Bo-Yi Wu'
inputs:
database_driver:
description: support `postgres`, `mysql` or `mongo`. default is `postgres`
default: postgres
database_username:
description: database username
database_password:
description: database password
database_name:
description: database name
database_host:
description: database host
database_opts:
description: see the `pg_dump`, `mysqldump` or `mongodump` command
storage_driver:
description: support `s3` or `disk`. default is `s3`
default: s3
access_key_id:
description: minio or aws s3 access key id
secret_access_key:
description: minio or aws s3 secret access key
storage_endpoint:
description: s3 endpoint. default is `s3.amazonaws.com`
default: s3.amazonaws.com
storage_bucket:
description: s3 bucket name
storage_region:
description: s3 region. default is `ap-northeast-1`
default: ap-northeast-1
storage_path:
description: backup folder path in bucket. default is `backup` and all dump file will save in `bucket/backup` directory
default: backup
storage_ssl:
description: default is `false`
default: "false"
storage_insecure_skip_verify:
description: default is `false`
default: "false"
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'terminal'
color: 'gray-dark'
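A scheduled-backup sketch using the inputs above; the ref, host, and secret names are assumptions.
name: Nightly database backup
on:
  schedule:
    - cron: '0 2 * * *'
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/database-backup-action@master   # ref assumed; pin to a released tag
        with:
          database_driver: postgres
          database_host: db.example.com
          database_name: app
          database_username: ${{ secrets.DB_USER }}
          database_password: ${{ secrets.DB_PASSWORD }}
          access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          storage_bucket: my-backup-bucket
          storage_region: us-east-1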
Action ID: marketplace/actions/deploy-pages
Author: GitHub
Publisher: actions
Repository: github.com/actions/deploy-pages
A GitHub Action to deploy an artifact as a GitHub Pages site
| Name | Required | Description |
|---|---|---|
token |
Required | GitHub token Default: ${{ github.token }} |
timeout |
Optional | Time in milliseconds after which to timeout and cancel the deployment (default: 10 minutes) Default: 600000 |
error_count |
Optional | Maximum number of status report errors before cancelling a deployment (default: 10) Default: 10 |
reporting_interval |
Optional | Time in milliseconds between two deployment status reports (default: 5 seconds) Default: 5000 |
artifact_name |
Optional | Name of the artifact to deploy Default: github-pages |
preview |
Optional | Is this attempting to deploy a pull request as a GitHub Pages preview site? (NOTE: This feature is only in alpha currently and is not available to the public!) Default: false |
| Name | Description |
|---|---|
page_url |
URL to deployed GitHub Pages |
name: 'Deploy GitHub Pages site'
description: 'A GitHub Action to deploy an artifact as a GitHub Pages site'
author: 'GitHub'
runs:
using: 'node20'
main: 'dist/index.js'
inputs:
token:
description: 'GitHub token'
default: ${{ github.token }}
required: true
timeout:
description: 'Time in milliseconds after which to timeout and cancel the deployment (default: 10 minutes)'
required: false
default: '600000'
error_count:
description: 'Maximum number of status report errors before cancelling a deployment (default: 10)'
required: false
default: '10'
reporting_interval:
description: 'Time in milliseconds between two deployment status report (default: 5 seconds)'
required: false
default: '5000'
artifact_name:
description: 'Name of the artifact to deploy'
required: false
default: 'github-pages'
preview:
description: 'Is this attempting to deploy a pull request as a GitHub Pages preview site? (NOTE: This feature is only in alpha currently and is not available to the public!)'
required: false
default: 'false'
outputs:
page_url:
description: 'URL to deployed GitHub Pages'
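A typical usage sketch: upload the built site with actions/upload-pages-artifact, then deploy it. The build step is elided and the version tags are assumptions.
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      pages: write      # allow deploying to Pages
      id-token: write   # allow OIDC verification of the deployment source
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/upload-pages-artifact@v3   # tag assumed; site built into ./public beforehand
        with:
          path: ./public
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4            # tag assumed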
Action ID: marketplace/julia-actions/install-julia-from-url
Author: Sascha Mann
Publisher: julia-actions
Repository: github.com/julia-actions/install-julia-from-url
Install Julia from a given URL.
| Name | Required | Description |
|---|---|---|
url |
Required | URL pointing at the archive. |
target-dir |
Optional | Directory that Julia will be installed in Default: $HOME/julia |
name: Install Julia from URL
description: Install Julia from a given URL.
author: Sascha Mann
branding:
icon: download
color: purple
inputs:
url:
description: URL pointing at the archive.
default: ''
required: true
target-dir:
description: Directory that Julia will be installed in
default: $HOME/julia
required: false
runs:
using: composite
steps:
- name: Download and extract Julia
run: |
mkdir -p "${{ inputs.target-dir }}"
curl -LsS "${{ inputs.url }}" | tar -xz --strip-components=1 -C "${{ inputs.target-dir }}"
shell: bash
- name: Add to PATH
run: echo "${{ inputs.target-dir }}/bin" >> "$GITHUB_PATH"
shell: bash
- name: Print Version
run: julia --compile=min -O0 -e "using InteractiveUtils; versioninfo()"
shell: bash
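Example workflow steps using the inputs above; the ref is assumed and the download URL is illustrative (any Julia binary archive URL works).
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: julia-actions/install-julia-from-url@v1   # ref assumed
        with:
          url: https://julialang-s3.julialang.org/bin/linux/x64/1.10/julia-1.10.4-linux-x86_64.tar.gz
      - run: julia --version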
Action ID: marketplace/rhysd/osv-scanner
Author: Unknown
Publisher: rhysd
Repository: github.com/rhysd/osv-scanner
Scans your directory against the OSV database
| Name | Required | Description |
|---|---|---|
to-scan |
Required | Directory to scan Default: /github/workspace |
version |
Required | osv-scanner version to use |
arch |
Optional | osv-scanner architecture Default: amd64 |
name: 'osv-scanner'
description: 'Scans your directory against the OSV database'
inputs:
to-scan:
description: 'Directory to scan'
required: true
default: '/github/workspace'
version:
description: 'osv-scanner version to use'
required: true
arch:
description: 'osv-scanner architecture'
required: false
default: 'amd64'
runs:
using: 'docker'
image: 'ghcr.io/google/osv-scanner:${{ inputs.version }}-${{ inputs.arch }}'
args:
- '--skip-git'
- ${{ inputs.to-scan }}
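Example workflow usage based on the inputs above; the action ref and the osv-scanner release tag are assumptions (the tag selects the container image that is pulled).
jobs:
  osv-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rhysd/osv-scanner@main        # ref assumed
        with:
          version: v1.7.0                   # osv-scanner release, assumed
          to-scan: /github/workspace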
Action ID: marketplace/andstor/file-existence-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/file-existence-action
GitHub Action to check for file existence
| Name | Required | Description |
|---|---|---|
files |
Required | Comma separated string with paths to files and directories to check for existence. |
ignore_case |
Optional | Ignore if a file has upper or lower cases. |
follow_symbolic_links |
Optional | Indicates whether to follow symbolic links. Default: True |
fail |
Optional | Makes the Action fail on missing files. |
allow_failure |
Optional | This variable is deprecated in favour of "fail". |
| Name | Description |
|---|---|
files_exists |
Whether the file(s) exists or not. |
name: 'File Existence'
description: 'GitHub Action to check for file existence'
author: 'André Storhaug'
branding:
icon: 'file-text'
color: 'green'
inputs:
files:
description: 'Comma separated string with paths to files and directories to check for existence.'
required: true
ignore_case:
description: 'Ignore if a file has upper or lower cases.'
default: false
required: false
follow_symbolic_links:
description: 'Indicates whether to follow symbolic links.'
default: true
required: false
fail:
description: 'Makes the Action fail on missing files.'
default: false
required: false
allow_failure:
description: 'This variable is deprecated in favour of "fail".'
required: false
outputs:
files_exists:
description: 'Whether the file(s) exists or not.'
runs:
using: 'node20'
main: 'dist/index.js'
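Example workflow usage sketched from the inputs and output above; the ref and file list are illustrative.
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check required files
        id: check_files
        uses: andstor/file-existence-action@v3   # tag assumed
        with:
          files: "Dockerfile, docs/README.md"
          fail: true
      - if: steps.check_files.outputs.files_exists == 'true'
        run: echo "All required files are present."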
Action ID: marketplace/mheap/portal-sync-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/portal-sync-action
WIP Sync action for Konnect portal
| Name | Required | Description |
|---|---|---|
konnect_pat |
Required | Konnect authentication details |
name: Portal sync action
description: WIP Sync action for Konnect portal
runs:
using: docker
image: Dockerfile
branding:
icon: edit-2
color: orange
inputs:
konnect_pat:
description: Konnect authentication details
required: true
Action ID: marketplace/vsoch/contributor-ci
Author: Vanessa Sochat
Publisher: vsoch
Repository: github.com/vsoch/contributor-ci
Extract repository contribution metadata.
| Name | Required | Description |
|---|---|---|
results_dir |
Optional | If doing an extraction, save results here. Defaults to .cci in $PWD |
extract |
Optional | Run this kind of extraction (e.g., repos or all) Default: repos |
cfa |
Optional | Generate contributor friendliness assessment files for a repository instead. |
update |
Optional | Update an existing Contributor CI interface. |
update_random |
Optional | Only update a random selection of N extractors. This is intended for large projects. |
update_cfa |
Optional | Find and generate CFAs for new repos. |
config_file |
Required | The contributor-ci.yaml file with organization metadata. Default: contributor-ci.yaml |
extract_save_format |
Optional | When using extract, change the default save format / structure from year/month/day to something else. This will force extraction of new data since we can't reliably say when it was last done. If you need to change the save format but want to use a cache, it's recommended not to change the format, but to clean up directories you don't need afterwards. |
name: "contributor-ci-action"
author: "Vanessa Sochat"
description: "Extract repository contribution metadata."
inputs:
results_dir:
description: "If doing an extraction, save results here. Defaults to .cci in $PWD"
required: false
extract:
description: "Run this kind of extraction (e.g., repos or all)"
required: false
default: repos
cfa:
description: "Generate contributor friendliness assessment files for a repository instead."
required: false
update:
description: "Update an existing Contributor CI interface."
required: false
update_random:
description: Only update a random selection of N extractors. This is intended for large projects.
required: false
update_cfa:
description: "Find and generate CFAs for new repos."
required: false
config_file:
description: "The contributor-ci.yaml file with organization metadata."
required: true
default: contributor-ci.yaml
extract_save_format:
description: |
Given using extract, change the default save format / structure from year/month/day to something else.
This will force extraction of new data since we can't reliably say when it was last done. If you need
to change the save format but want to use a cache, it's recommended to not change the format, but clean
up directories you don't need after.
required: false
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "link"
color: "blue"
Action ID: marketplace/appleboy/kubernetes-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/kubernetes-action
Generate a Kubeconfig or create and update K8s Deployments.
| Name | Required | Description |
|---|---|---|
server |
Required | Address of the Kubernetes cluster |
skip_tls_verify |
Optional | Skip validity check for server certificate (default: false) |
ca_cert |
Optional | PEM-encoded certificate authority certificates |
token |
Required | Kubernetes service account token |
namespace |
Optional | Kubernetes namespace |
proxy_url |
Optional | URLs with http, https, and socks5 |
templates |
Optional | Templates to render, supports glob pattern |
cluster_name |
Optional | Cluster name (default: "default") Default: default |
authinfo_name |
Optional | AuthInfo name (default: "default") Default: default |
context_name |
Optional | Context name (default: "default") Default: default |
deployment |
Optional | Name of the Kubernetes deployment to update |
container |
Optional | Name of the container within the deployment to update |
image |
Optional | New image and tag for the container |
output |
Optional | Output kubeconfig to file |
debug |
Optional | Enable debug mode (default: false) |
name: 'Deploy K8S Tool'
description: 'Generate a Kubeconfig or creating & updating K8s Deployments.'
author: 'Bo-Yi Wu'
inputs:
server:
description: 'Address of the Kubernetes cluster'
required: true
skip_tls_verify:
description: 'Skip validity check for server certificate (default: false)'
ca_cert:
description: 'PEM-encoded certificate authority certificates'
token:
description: 'Kubernetes service account token'
required: true
namespace:
description: 'Kubernetes namespace'
proxy_url:
description: 'URLs with http, https, and socks5'
templates:
description: 'Templates to render, supports glob pattern'
cluster_name:
description: 'Cluster name (default: "default")'
default: 'default'
authinfo_name:
description: 'AuthInfo name (default: "default")'
default: 'default'
context_name:
description: 'Context name (default: "default")'
default: 'default'
deployment:
description: 'Name of the Kubernetes deployment to update'
container:
description: 'Name of the container within the deployment to update'
image:
description: 'New image and tag for the container'
output:
description: 'Output kubeconfig to file'
debug:
description: 'Enable debug mode (default: false)'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: "cloud"
color: "blue"
Action ID: marketplace/CodelyTV/request-review-by-path
Author: CodelyTV
Publisher: CodelyTV
Repository: github.com/CodelyTV/request-review-by-path
Assign pull requests by the modified path
| Name | Required | Description |
|---|---|---|
GITHUB_TOKEN |
Required | GitHub token |
mapping |
Required | Mapping for assigning the pull request |
name: 'Assign by path'
description: 'Assign pull requests by the modified path'
author: 'CodelyTV'
branding:
icon: 'at-sign'
color: 'white'
inputs:
GITHUB_TOKEN:
description: 'GitHub token'
required: true
mapping:
description: 'Mapping for assigning the pull request'
required: true
runs:
using: 'node12'
main: 'dist/index.js'
Action ID: marketplace/actions/create-github-app-token
Author: Gregor Martynus and Parker Brown
Publisher: actions
Repository: github.com/actions/create-github-app-token
GitHub Action for creating a GitHub App installation access token
| Name | Required | Description |
|---|---|---|
app-id |
Required | GitHub App ID |
private-key |
Required | GitHub App private key |
owner |
Optional | The owner of the GitHub App installation (defaults to current repository owner) |
repositories |
Optional | Comma or newline-separated list of repositories to install the GitHub App on (defaults to current repository if owner is unset) |
skip-token-revoke |
Optional | If true, the token will not be revoked when the current job is complete Default: false |
github-api-url |
Optional | The URL of the GitHub REST API. Default: ${{ github.api_url }} |
permission-actions |
Optional | The level of permission to grant the access token for GitHub Actions workflows, workflow runs, and artifacts. Can be set to 'read' or 'write'. |
permission-administration |
Optional | The level of permission to grant the access token for repository creation, deletion, settings, teams, and collaborators creation. Can be set to 'read' or 'write'. |
permission-checks |
Optional | The level of permission to grant the access token for checks on code. Can be set to 'read' or 'write'. |
permission-codespaces |
Optional | The level of permission to grant the access token to create, edit, delete, and list Codespaces. Can be set to 'read' or 'write'. |
permission-contents |
Optional | The level of permission to grant the access token for repository contents, commits, branches, downloads, releases, and merges. Can be set to 'read' or 'write'. |
permission-custom-properties-for-organizations |
Optional | The level of permission to grant the access token to view and edit custom properties for an organization, when allowed by the property. Can be set to 'read' or 'write'. |
permission-dependabot-secrets |
Optional | The level of permission to grant the access token to manage Dependabot secrets. Can be set to 'read' or 'write'. |
permission-deployments |
Optional | The level of permission to grant the access token for deployments and deployment statuses. Can be set to 'read' or 'write'. |
permission-email-addresses |
Optional | The level of permission to grant the access token to manage the email addresses belonging to a user. Can be set to 'read' or 'write'. |
permission-enterprise-custom-properties-for-organizations |
Optional | The level of permission to grant the access token for organization custom properties management at the enterprise level. Can be set to 'read', 'write', or 'admin'. |
permission-environments |
Optional | The level of permission to grant the access token for managing repository environments. Can be set to 'read' or 'write'. |
permission-followers |
Optional | The level of permission to grant the access token to manage the followers belonging to a user. Can be set to 'read' or 'write'. |
permission-git-ssh-keys |
Optional | The level of permission to grant the access token to manage git SSH keys. Can be set to 'read' or 'write'. |
permission-gpg-keys |
Optional | The level of permission to grant the access token to view and manage GPG keys belonging to a user. Can be set to 'read' or 'write'. |
permission-interaction-limits |
Optional | The level of permission to grant the access token to view and manage interaction limits on a repository. Can be set to 'read' or 'write'. |
permission-issues |
Optional | The level of permission to grant the access token for issues and related comments, assignees, labels, and milestones. Can be set to 'read' or 'write'. |
permission-members |
Optional | The level of permission to grant the access token for organization teams and members. Can be set to 'read' or 'write'. |
permission-metadata |
Optional | The level of permission to grant the access token to search repositories, list collaborators, and access repository metadata. Can be set to 'read' or 'write'. |
permission-organization-administration |
Optional | The level of permission to grant the access token to manage access to an organization. Can be set to 'read' or 'write'. |
permission-organization-announcement-banners |
Optional | The level of permission to grant the access token to view and manage announcement banners for an organization. Can be set to 'read' or 'write'. |
permission-organization-copilot-seat-management |
Optional | The level of permission to grant the access token for managing access to GitHub Copilot for members of an organization with a Copilot Business subscription. This property is in public preview and is subject to change. Can be set to 'write'. |
permission-organization-custom-org-roles |
Optional | The level of permission to grant the access token for custom organization roles management. Can be set to 'read' or 'write'. |
permission-organization-custom-properties |
Optional | The level of permission to grant the access token for repository custom properties management at the organization level. Can be set to 'read', 'write', or 'admin'. |
permission-organization-custom-roles |
Optional | The level of permission to grant the access token for custom repository roles management. Can be set to 'read' or 'write'. |
permission-organization-events |
Optional | The level of permission to grant the access token to view events triggered by an activity in an organization. Can be set to 'read'. |
permission-organization-hooks |
Optional | The level of permission to grant the access token to manage the post-receive hooks for an organization. Can be set to 'read' or 'write'. |
permission-organization-packages |
Optional | The level of permission to grant the access token for organization packages published to GitHub Packages. Can be set to 'read' or 'write'. |
permission-organization-personal-access-token-requests |
Optional | The level of permission to grant the access token for viewing and managing fine-grained personal access tokens that have been approved by an organization. Can be set to 'read' or 'write'. |
permission-organization-personal-access-tokens |
Optional | The level of permission to grant the access token for viewing and managing fine-grained personal access token requests to an organization. Can be set to 'read' or 'write'. |
permission-organization-plan |
Optional | The level of permission to grant the access token for viewing an organization's plan. Can be set to 'read'. |
permission-organization-projects |
Optional | The level of permission to grant the access token to manage organization projects and projects public preview (where available). Can be set to 'read', 'write', or 'admin'. |
permission-organization-secrets |
Optional | The level of permission to grant the access token to manage organization secrets. Can be set to 'read' or 'write'. |
permission-organization-self-hosted-runners |
Optional | The level of permission to grant the access token to view and manage GitHub Actions self-hosted runners available to an organization. Can be set to 'read' or 'write'. |
permission-organization-user-blocking |
Optional | The level of permission to grant the access token to view and manage users blocked by the organization. Can be set to 'read' or 'write'. |
permission-packages |
Optional | The level of permission to grant the access token for packages published to GitHub Packages. Can be set to 'read' or 'write'. |
permission-pages |
Optional | The level of permission to grant the access token to retrieve Pages statuses, configuration, and builds, as well as create new builds. Can be set to 'read' or 'write'. |
permission-profile |
Optional | The level of permission to grant the access token to manage the profile settings belonging to a user. Can be set to 'write'. |
permission-pull-requests |
Optional | The level of permission to grant the access token for pull requests and related comments, assignees, labels, milestones, and merges. Can be set to 'read' or 'write'. |
permission-repository-custom-properties |
Optional | The level of permission to grant the access token to view and edit custom properties for a repository, when allowed by the property. Can be set to 'read' or 'write'. |
permission-repository-hooks |
Optional | The level of permission to grant the access token to manage the post-receive hooks for a repository. Can be set to 'read' or 'write'. |
permission-repository-projects |
Optional | The level of permission to grant the access token to manage repository projects, columns, and cards. Can be set to 'read', 'write', or 'admin'. |
permission-secret-scanning-alerts |
Optional | The level of permission to grant the access token to view and manage secret scanning alerts. Can be set to 'read' or 'write'. |
permission-secrets |
Optional | The level of permission to grant the access token to manage repository secrets. Can be set to 'read' or 'write'. |
permission-security-events |
Optional | The level of permission to grant the access token to view and manage security events like code scanning alerts. Can be set to 'read' or 'write'. |
permission-single-file |
Optional | The level of permission to grant the access token to manage just a single file. Can be set to 'read' or 'write'. |
permission-starring |
Optional | The level of permission to grant the access token to list and manage repositories a user is starring. Can be set to 'read' or 'write'. |
permission-statuses |
Optional | The level of permission to grant the access token for commit statuses. Can be set to 'read' or 'write'. |
permission-team-discussions |
Optional | The level of permission to grant the access token to manage team discussions and related comments. Can be set to 'read' or 'write'. |
permission-vulnerability-alerts |
Optional | The level of permission to grant the access token to manage Dependabot alerts. Can be set to 'read' or 'write'. |
permission-workflows |
Optional | The level of permission to grant the access token to update GitHub Actions workflow files. Can be set to 'write'. |
| Name | Description |
|---|---|
token |
GitHub installation access token |
installation-id |
GitHub App installation ID |
app-slug |
GitHub App slug |
name: "Create GitHub App Token"
description: "GitHub Action for creating a GitHub App installation access token"
author: "Gregor Martynus and Parker Brown"
branding:
icon: "lock"
color: "gray-dark"
inputs:
app-id:
description: "GitHub App ID"
required: true
private-key:
description: "GitHub App private key"
required: true
owner:
description: "The owner of the GitHub App installation (defaults to current repository owner)"
required: false
repositories:
description: "Comma or newline-separated list of repositories to install the GitHub App on (defaults to current repository if owner is unset)"
required: false
skip-token-revoke:
description: "If true, the token will not be revoked when the current job is complete"
required: false
default: "false"
# Make GitHub API configurable to support non-GitHub Cloud use cases
# see https://github.com/actions/create-github-app-token/issues/77
github-api-url:
description: The URL of the GitHub REST API.
default: ${{ github.api_url }}
# <START GENERATED PERMISSIONS INPUTS>
permission-actions:
description: "The level of permission to grant the access token for GitHub Actions workflows, workflow runs, and artifacts. Can be set to 'read' or 'write'."
permission-administration:
description: "The level of permission to grant the access token for repository creation, deletion, settings, teams, and collaborators creation. Can be set to 'read' or 'write'."
permission-checks:
description: "The level of permission to grant the access token for checks on code. Can be set to 'read' or 'write'."
permission-codespaces:
description: "The level of permission to grant the access token to create, edit, delete, and list Codespaces. Can be set to 'read' or 'write'."
permission-contents:
description: "The level of permission to grant the access token for repository contents, commits, branches, downloads, releases, and merges. Can be set to 'read' or 'write'."
permission-custom-properties-for-organizations:
description: "The level of permission to grant the access token to view and edit custom properties for an organization, when allowed by the property. Can be set to 'read' or 'write'."
permission-dependabot-secrets:
description: "The level of permission to grant the access token to manage Dependabot secrets. Can be set to 'read' or 'write'."
permission-deployments:
description: "The level of permission to grant the access token for deployments and deployment statuses. Can be set to 'read' or 'write'."
permission-email-addresses:
description: "The level of permission to grant the access token to manage the email addresses belonging to a user. Can be set to 'read' or 'write'."
permission-enterprise-custom-properties-for-organizations:
description: "The level of permission to grant the access token for organization custom properties management at the enterprise level. Can be set to 'read', 'write', or 'admin'."
permission-environments:
description: "The level of permission to grant the access token for managing repository environments. Can be set to 'read' or 'write'."
permission-followers:
description: "The level of permission to grant the access token to manage the followers belonging to a user. Can be set to 'read' or 'write'."
permission-git-ssh-keys:
description: "The level of permission to grant the access token to manage git SSH keys. Can be set to 'read' or 'write'."
permission-gpg-keys:
description: "The level of permission to grant the access token to view and manage GPG keys belonging to a user. Can be set to 'read' or 'write'."
permission-interaction-limits:
description: "The level of permission to grant the access token to view and manage interaction limits on a repository. Can be set to 'read' or 'write'."
permission-issues:
description: "The level of permission to grant the access token for issues and related comments, assignees, labels, and milestones. Can be set to 'read' or 'write'."
permission-members:
description: "The level of permission to grant the access token for organization teams and members. Can be set to 'read' or 'write'."
permission-metadata:
description: "The level of permission to grant the access token to search repositories, list collaborators, and access repository metadata. Can be set to 'read' or 'write'."
permission-organization-administration:
description: "The level of permission to grant the access token to manage access to an organization. Can be set to 'read' or 'write'."
permission-organization-announcement-banners:
description: "The level of permission to grant the access token to view and manage announcement banners for an organization. Can be set to 'read' or 'write'."
permission-organization-copilot-seat-management:
description: "The level of permission to grant the access token for managing access to GitHub Copilot for members of an organization with a Copilot Business subscription. This property is in public preview and is subject to change. Can be set to 'write'."
permission-organization-custom-org-roles:
description: "The level of permission to grant the access token for custom organization roles management. Can be set to 'read' or 'write'."
permission-organization-custom-properties:
description: "The level of permission to grant the access token for repository custom properties management at the organization level. Can be set to 'read', 'write', or 'admin'."
permission-organization-custom-roles:
description: "The level of permission to grant the access token for custom repository roles management. Can be set to 'read' or 'write'."
permission-organization-events:
description: "The level of permission to grant the access token to view events triggered by an activity in an organization. Can be set to 'read'."
permission-organization-hooks:
description: "The level of permission to grant the access token to manage the post-receive hooks for an organization. Can be set to 'read' or 'write'."
permission-organization-packages:
description: "The level of permission to grant the access token for organization packages published to GitHub Packages. Can be set to 'read' or 'write'."
permission-organization-personal-access-token-requests:
description: "The level of permission to grant the access token for viewing and managing fine-grained personal access tokens that have been approved by an organization. Can be set to 'read' or 'write'."
permission-organization-personal-access-tokens:
description: "The level of permission to grant the access token for viewing and managing fine-grained personal access token requests to an organization. Can be set to 'read' or 'write'."
permission-organization-plan:
description: "The level of permission to grant the access token for viewing an organization's plan. Can be set to 'read'."
permission-organization-projects:
description: "The level of permission to grant the access token to manage organization projects and projects public preview (where available). Can be set to 'read', 'write', or 'admin'."
permission-organization-secrets:
description: "The level of permission to grant the access token to manage organization secrets. Can be set to 'read' or 'write'."
permission-organization-self-hosted-runners:
description: "The level of permission to grant the access token to view and manage GitHub Actions self-hosted runners available to an organization. Can be set to 'read' or 'write'."
permission-organization-user-blocking:
description: "The level of permission to grant the access token to view and manage users blocked by the organization. Can be set to 'read' or 'write'."
permission-packages:
description: "The level of permission to grant the access token for packages published to GitHub Packages. Can be set to 'read' or 'write'."
permission-pages:
description: "The level of permission to grant the access token to retrieve Pages statuses, configuration, and builds, as well as create new builds. Can be set to 'read' or 'write'."
permission-profile:
description: "The level of permission to grant the access token to manage the profile settings belonging to a user. Can be set to 'write'."
permission-pull-requests:
description: "The level of permission to grant the access token for pull requests and related comments, assignees, labels, milestones, and merges. Can be set to 'read' or 'write'."
permission-repository-custom-properties:
description: "The level of permission to grant the access token to view and edit custom properties for a repository, when allowed by the property. Can be set to 'read' or 'write'."
permission-repository-hooks:
description: "The level of permission to grant the access token to manage the post-receive hooks for a repository. Can be set to 'read' or 'write'."
permission-repository-projects:
description: "The level of permission to grant the access token to manage repository projects, columns, and cards. Can be set to 'read', 'write', or 'admin'."
permission-secret-scanning-alerts:
description: "The level of permission to grant the access token to view and manage secret scanning alerts. Can be set to 'read' or 'write'."
permission-secrets:
description: "The level of permission to grant the access token to manage repository secrets. Can be set to 'read' or 'write'."
permission-security-events:
description: "The level of permission to grant the access token to view and manage security events like code scanning alerts. Can be set to 'read' or 'write'."
permission-single-file:
description: "The level of permission to grant the access token to manage just a single file. Can be set to 'read' or 'write'."
permission-starring:
description: "The level of permission to grant the access token to list and manage repositories a user is starring. Can be set to 'read' or 'write'."
permission-statuses:
description: "The level of permission to grant the access token for commit statuses. Can be set to 'read' or 'write'."
permission-team-discussions:
description: "The level of permission to grant the access token to manage team discussions and related comments. Can be set to 'read' or 'write'."
permission-vulnerability-alerts:
description: "The level of permission to grant the access token to manage Dependabot alerts. Can be set to 'read' or 'write'."
permission-workflows:
description: "The level of permission to grant the access token to update GitHub Actions workflow files. Can be set to 'write'."
# <END GENERATED PERMISSIONS INPUTS>
outputs:
token:
description: "GitHub installation access token"
installation-id:
description: "GitHub App installation ID"
app-slug:
description: "GitHub App slug"
runs:
using: "node20"
main: "dist/main.cjs"
post: "dist/post.cjs"
Action ID: marketplace/azure/setup-kubectl
Author: Unknown
Publisher: azure
Repository: github.com/azure/setup-kubectl
Install a specific version of kubectl binary. Acceptable values are latest or any semantic version string like "v1.15.0"
| Name | Required | Description |
|---|---|---|
| version | Required | Version of kubectl Default: latest |

| Name | Description |
|---|---|
| kubectl-path | Path to the cached kubectl binary |
name: 'Kubectl tool installer'
description: 'Install a specific version of kubectl binary. Acceptable values are latest or any semantic version string like "v1.15.0"'
inputs:
version:
description: 'Version of kubectl'
required: true
default: 'latest'
outputs:
kubectl-path:
description: 'Path to the cached kubectl binary'
branding:
color: 'blue'
runs:
using: 'node20'
main: 'lib/index.js'
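A minimal usage sketch for this installer; the `uses:` tag, the trigger, and the requested kubectl version are illustrative placeholders rather than values taken from the action's docs.

```yaml
# Hypothetical workflow: install kubectl, then print its client version.
name: kubectl-demo
on: workflow_dispatch
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - name: Install kubectl
        id: install
        uses: azure/setup-kubectl@v4   # tag is illustrative; pin to a published release
        with:
          version: 'v1.29.0'           # or 'latest'
      - name: Show the cached binary path and version
        run: |
          echo "kubectl at: ${{ steps.install.outputs.kubectl-path }}"
          kubectl version --client
```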
Action ID: marketplace/mislav/ansible-lint
Author: Ansible by Red Hat <info@ansible.com>
Publisher: mislav
Repository: github.com/mislav/ansible-lint
Run Ansible Lint
| Name | Required | Description |
|---|---|---|
| args | Optional | Arguments to be passed to ansible-lint command. |
| setup_python | Optional | If false, this action will not setup python and will instead rely on the already installed python. Default: True |
| working_directory | Optional | The directory where to run ansible-lint from. Default is `github.workspace`. |
---
name: run-ansible-lint
description: Run Ansible Lint
author: Ansible by Red Hat <info@ansible.com>
branding:
icon: shield
color: red
inputs:
args:
description: Arguments to be passed to ansible-lint command.
required: false
default: ""
setup_python:
description: If false, this action will not setup python and will instead rely on the already installed python.
required: false
default: true
working_directory:
description: The directory where to run ansible-lint from. Default is `github.workspace`.
required: false
default: ""
runs:
using: composite
steps:
# Due to GHA limitation, caching works only for files within GITHUB_WORKSPACE
# folder, so we are forced to stick this temporary file inside .git, so it
# will not affect the linted repository.
# https://github.com/actions/toolkit/issues/1035
# https://github.com/actions/setup-python/issues/361
- name: Generate .git/ansible-lint-requirements.txt
shell: bash
run: |
wget --output-document=.git/ansible-lint-requirements.txt https://raw.githubusercontent.com/ansible/ansible-lint/${{ github.action_ref || 'main' }}/.config/requirements-lock.txt
- name: Set up Python
if: inputs.setup_python == 'true'
uses: actions/setup-python@v4
with:
cache: pip
cache-dependency-path: .git/ansible-lint-requirements.txt
python-version: "3.11"
- name: Install ansible-lint
shell: bash
# We need to set the version manually because $GITHUB_ACTION_PATH is not
# a git clone and setuptools-scm would not be able to determine the version.
# git+https://github.com/ansible/ansible-lint@${{ github.action_ref || 'main' }}
# SETUPTOOLS_SCM_PRETEND_VERSION=${{ github.action_ref || 'main' }}
run: |
cd $GITHUB_ACTION_PATH
pip install "ansible-lint[lock] @ git+https://github.com/ansible/ansible-lint@${{ github.action_ref || 'main' }}"
ansible-lint --version
- name: Process inputs
id: inputs
shell: bash
run: |
if [[ -n "${{ inputs.working_directory }}" ]]; then
echo "working_directory=${{ inputs.working_directory }}" >> $GITHUB_OUTPUT
else
echo "working_directory=${{ github.workspace }}" >> $GITHUB_OUTPUT
fi
- name: Run ansible-lint
shell: bash
working-directory: ${{ steps.inputs.outputs.working_directory }}
run: ansible-lint ${{ inputs.args }}
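A minimal usage sketch for this linter; the `uses:` ref, the extra arguments, and the `./ansible` project layout are assumptions, not values taken from the action's docs.

```yaml
# Hypothetical workflow: lint an Ansible project on every pull request.
name: ansible-lint
on: pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ansible-lint
        uses: mislav/ansible-lint@main   # ref is illustrative; pin as appropriate
        with:
          args: "--profile production"   # example extra arguments; optional
          working_directory: ./ansible   # assumed project layout
```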
Action ID: marketplace/mheap/github-action-hold-your-horses
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-hold-your-horses
Prevent PRs from being merged too quickly
name: Hold Your Horses
description: Prevent PRs from being merged too quickly
runs:
using: docker
image: Dockerfile
branding:
icon: stop-circle
color: red
Action ID: marketplace/robvanderleek/create-issue-branch
Author: Unknown
Publisher: robvanderleek
Repository: github.com/robvanderleek/create-issue-branch
GitHub action that creates a new branch after assigning an issue.
| Name | Description |
|---|---|
| branchName | Name of the branch that was created |
name: 'Create Issue Branch'
description: 'GitHub action that creates a new branch after assigning an issue.'
branding:
icon: 'activity'
color: 'blue'
outputs:
branchName:
description: 'Name of the branch that was created'
runs:
using: 'node24'
main: 'action-dist/index.js'
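A usage sketch based on typical issue-assignment triggers; the event wiring, token environment variable, and `uses:` ref are assumptions to be checked against the action's README.

```yaml
# Hypothetical workflow: create a branch when an issue is assigned and echo its name.
name: create-issue-branch
on:
  issues:
    types: [assigned]
jobs:
  create_branch:
    runs-on: ubuntu-latest
    steps:
      - name: Create Issue Branch
        id: cib
        uses: robvanderleek/create-issue-branch@main   # ref is illustrative
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}    # assumed token wiring
      - run: echo "Created branch ${{ steps.cib.outputs.branchName }}"
```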
Action ID: marketplace/amirisback/video-player
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/video-player
SDK for anything your problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'FrogoKickStartAndroid'
description: 'SDK for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/JS-DevTools/npm-publish
Author: James Messinger
Publisher: JS-DevTools
Repository: github.com/JS-DevTools/npm-publish
Fast, easy publishing to NPM
| Name | Required | Description |
|---|---|---|
| token | Optional | The NPM access token to use when publishing |
| registry | Optional | The NPM registry URL to use |
| package | Optional | The path to your package or its package.json file |
| tag | Optional | The distribution tag to publish |
| access | Optional | Determines whether the published package should be publicly visible, or restricted to members of your NPM organization. |
| provenance | Optional | Attach provenance statements when publishing. |
| strategy | Optional | Version check and release strategy. If "all" (default), package will be published if its version is simply not yet published. If "upgrade", package will be published if its version is higher than the existing tag, according to semantic versioning. |
| ignore-scripts | Optional | Run npm with the --ignore-scripts flag as a security precaution. Enabled by default. |
| dry-run | Optional | Run npm with the --dry-run flag to avoid actually publishing anything. |

| Name | Description |
|---|---|
| id | The identifier of the published package. If a release was published, format is `${name}@${version}`. If no release occurred, will be an empty string. |
| type | The type of version change that occurred on the published tag. If the release was an upgrade, will be a semver release type ("major", "minor", ...). If the published tag had no previous version, will be "initial". If the version change was not an upgrade, will be "different". If no release occurred, will be an empty string. |
| name | Name of the package. |
| version | Version of the package. |
| old-version | The previous version on the distribution tag. If there was no previous version on the tag, will be an empty string. |
| registry | The registry used for version checking and publishing. |
| tag | The distribution tag used for version checking and publishing. |
| access | The package access setting used. If configured by the action or package.json, will be "public" or "protected". If not configured for a non-scoped package, will be "public". If not configured for a scoped package, will be "default". |
name: NPM Publish
description: Fast, easy publishing to NPM
author: James Messinger
branding:
color: blue
icon: package
inputs:
token:
description: The NPM access token to use when publishing
required: false
registry:
description: The NPM registry URL to use
required: false
package:
description: The path to your package or its package.json file
required: false
tag:
description: The distribution tag to publish
required: false
access:
description: >
Determines whether the published package should be publicly visible,
or restricted to members of your NPM organization.
required: false
provenance:
description: Attach provenance statements when publishing.
required: false
strategy:
description: >
Version check and release strategy.
If "all" (default), package will be published if its version is simply not yet published.
If "upgrade", package will be published if its version is higher than the existing tag,
according to semantic versioning.
required: false
ignore-scripts:
description: >
Run npm with the --ignore-scripts flag as a security precaution.
Enabled by default.
required: false
dry-run:
description: Run npm with the --dry-run flag to avoid actually publishing anything.
required: false
outputs:
id:
description: >
The identifier of the published package.
If a release was published, format is `${name}@${version}`.
If no release occurred, will be an empty string.
type:
description: >
The type of version change that occurred on the published tag.
If release was an upgrade, will be a semver release type ("major", "minor", ...).
If the published tag had no previous version, will be "initial".
If the version change was not an upgrade, will be "different".
If no release occurred, will be an empty string.
name:
description: Name of the package.
version:
description: Version of the package.
old-version:
description: >
The previous version on the distribution tag.
If there was no previous version on the tag, will be an empty string.
registry:
description: The registry used for version checking and publishing.
tag:
description: The distribution tag used for version checking and publishing.
access:
description: >
The package access setting used.
If configured by the action or package.json, will be "public" or "protected".
If not configured for a non-scoped package, will be "public".
If not configured for a scoped package, will be "default".
runs:
using: node24
main: action.js
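A minimal usage sketch; the `uses:` tag, the `NPM_TOKEN` secret name, and the Node setup step are assumptions chosen for illustration.

```yaml
# Hypothetical workflow: publish the package when its version changes on main.
name: npm-publish
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Publish to npm
        id: publish
        uses: JS-DevTools/npm-publish@v3   # tag is illustrative
        with:
          token: ${{ secrets.NPM_TOKEN }}  # assumed secret name
          access: public
      - name: Report what happened
        if: steps.publish.outputs.type != ''
        run: echo "Published ${{ steps.publish.outputs.id }} (${{ steps.publish.outputs.type }})"
```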
Action ID: marketplace/mislav/action-shellcheck
Author: Ludeeus <hi@ludeeus.dev>
Publisher: mislav
Repository: github.com/mislav/action-shellcheck
GitHub action for ShellCheck.
| Name | Required | Description |
|---|---|---|
| additional_files | Optional | A space separated list of additional filenames to check |
| ignore | Optional | Paths to ignore when running ShellCheck |
| ignore_paths | Optional | Paths to ignore when running ShellCheck |
| ignore_names | Optional | Names to ignore when running ShellCheck |
| severity | Optional | Minimum severity of errors to consider. Options: [error, warning, info, style] |
| check_together | Optional | Run shellcheck on _all_ files at once, instead of one at a time |
| scandir | Optional | Directory to be searched for files. Default: . |
| disable_matcher | Optional | Set to true to skip using problem-matcher Default: false |
| format | Optional | Output format (checkstyle, diff, gcc, json, json1, quiet, tty) Default: gcc |
| version | Optional | Specify a concrete version of ShellCheck to use Default: stable |

| Name | Description |
|---|---|
| files | A list of files with issues |
| options | The options used |
name: "ShellCheck"
author: "Ludeeus <hi@ludeeus.dev>"
description: "GitHub action for ShellCheck."
inputs:
additional_files:
description: "A space separated list of additional filename to check"
required: false
default: ""
ignore:
description: "Paths to ignore when running ShellCheck"
required: false
default: ""
deprecationMessage: "Use ignore_paths or ignore_names instead."
ignore_paths:
description: "Paths to ignore when running ShellCheck"
required: false
default: ""
ignore_names:
description: "Names to ignore when running ShellCheck"
required: false
default: ""
severity:
description: "Minimum severity of errors to consider. Options: [error, warning, info, style]"
required: false
default: ""
check_together:
description: "Run shellcheck on _all_ files at once, instead of one at a time"
required: false
default: ""
scandir:
description: "Directory to be searched for files. Defaults to ."
required: false
default: "."
disable_matcher:
description: "Set to true to skip using problem-matcher"
required: false
default: "false"
deprecationMessage: "There are no problem-matchers, this setting does not do anything."
format:
description: "Output format (checkstyle, diff, gcc, json, json1, quiet, tty)"
required: false
default: "gcc"
version:
description: "Specify a concrete version of ShellCheck to use"
required: false
default: "stable"
outputs:
files:
description: A list of files with issues
value: ${{ steps.check.outputs.filepaths }}
options:
description: The options used
value: ${{ steps.options.outputs.options }}
branding:
icon: "terminal"
color: "gray-dark"
runs:
using: "composite"
steps:
- name: Download shellcheck
shell: bash
env:
INPUT_VERSION: ${{ inputs.version }}
run: |
if [[ "${{ runner.os }}" == "macOS" ]]; then
osvariant="darwin"
else
osvariant="linux"
fi
baseurl="https://github.com/koalaman/shellcheck/releases/download"
curl -Lso "${{ github.action_path }}/sc.tar.xz" \
"${baseurl}/${INPUT_VERSION}/shellcheck-${INPUT_VERSION}.${osvariant}.x86_64.tar.xz"
tar -xf "${{ github.action_path }}/sc.tar.xz" -C "${{ github.action_path }}"
mv "${{ github.action_path }}/shellcheck-${INPUT_VERSION}/shellcheck" \
"${{ github.action_path }}/shellcheck"
- name: Display shellcheck version
shell: bash
run: |
"${{ github.action_path }}/shellcheck" --version
- name: Set options
shell: bash
id: options
env:
INPUT_SEVERITY: ${{ inputs.severity }}
INPUT_FORMAT: ${{ inputs.format }}
run: |
declare -a options
if [[ -n "${INPUT_SEVERITY}" ]]; then
options+=("-S ${INPUT_SEVERITY}")
fi
options+=("--format=${INPUT_FORMAT}")
echo "options=${options[@]}" >> $GITHUB_OUTPUT
- name: Gather excluded paths
shell: bash
id: exclude
env:
INPUT_IGNORE: ${{ inputs.ignore }}
INPUT_IGNORE_PATHS: ${{ inputs.ignore_paths }}
INPUT_IGNORE_NAMES: ${{ inputs.ignore_names }}
run: |
declare -a excludes
set -f # temporarily disable globbing so that globs in input aren't expanded
excludes+=("! -path *./.git/*")
excludes+=("! -path *.go")
excludes+=("! -path */mvnw")
if [[ -n "${INPUT_IGNORE}" ]]; then
for path in ${INPUT_IGNORE}; do
excludes+=("! -path *./$path/*")
excludes+=("! -path */$path/*")
excludes+=("! -path $path")
done
else
for path in ${INPUT_IGNORE_PATHS}; do
excludes+=("! -path *./$path/*")
excludes+=("! -path */$path/*")
excludes+=("! -path $path")
done
fi
for name in ${INPUT_IGNORE_NAMES}; do
excludes+=("! -name $name")
done
echo "excludes=${excludes[@]}" >> $GITHUB_OUTPUT
set +f # re-enable globbing
- name: Gather additional files
shell: bash
id: additional
env:
INPUT_ADDITIONAL_FILES: ${{ inputs.additional_files }}
run: |
declare -a files
for file in ${INPUT_ADDITIONAL_FILES}; do
files+=("-o -name *$file")
done
echo "files=${files[@]}" >> $GITHUB_OUTPUT
- name: Run the check
shell: bash
id: check
env:
INPUT_SCANDIR: ${{ inputs.scandir }}
INPUT_CHECK_TOGETHER: ${{ inputs.check_together }}
INPUT_EXCLUDE_ARGS: ${{ steps.exclude.outputs.excludes }}
INPUT_ADDITIONAL_FILE_ARGS: ${{ steps.additional.outputs.files }}
INPUT_SHELLCHECK_OPTIONS: ${{ steps.options.outputs.options }}
run: |
statuscode=0
declare -a filepaths
shebangregex="^#! */[^ ]*/(env *)?[abk]*sh"
set -f # temporarily disable globbing so that globs in inputs aren't expanded
while IFS= read -r -d '' file; do
filepaths+=("$file")
done < <(find "${INPUT_SCANDIR}" \
${INPUT_EXCLUDE_ARGS} \
-type f \
'(' \
-name '*.bash' \
-o -name '.bashrc' \
-o -name 'bashrc' \
-o -name '.bash_aliases' \
-o -name '.bash_completion' \
-o -name '.bash_login' \
-o -name '.bash_logout' \
-o -name '.bash_profile' \
-o -name 'bash_profile' \
-o -name '*.ksh' \
-o -name 'suid_profile' \
-o -name '*.zsh' \
-o -name '.zlogin' \
-o -name 'zlogin' \
-o -name '.zlogout' \
-o -name 'zlogout' \
-o -name '.zprofile' \
-o -name 'zprofile' \
-o -name '.zsenv' \
-o -name 'zsenv' \
-o -name '.zshrc' \
-o -name 'zshrc' \
-o -name '*.sh' \
-o -path '*/.profile' \
-o -path '*/profile' \
-o -name '*.shlib' \
${INPUT_ADDITIONAL_FILE_ARGS} \
')' \
-print0)
while IFS= read -r -d '' file; do
head -n1 "$file" | grep -Eqs "$shebangregex" || continue
filepaths+=("$file")
done < <(find "${INPUT_SCANDIR}" \
${INPUT_EXCLUDE_ARGS} \
-type f ! -name '*.*' -perm /111 \
-print0)
if [[ -n "${INPUT_CHECK_TOGETHER}" ]]; then
"${{ github.action_path }}/shellcheck" \
${INPUT_SHELLCHECK_OPTIONS} \
"${filepaths[@]}" || statuscode=$?
else
for file in "${filepaths[@]}"; do
"${{ github.action_path }}/shellcheck" \
${INPUT_SHELLCHECK_OPTIONS} \
"$file" || statuscode=$?
done
fi
echo "filepaths=${filepaths[@]}" >> $GITHUB_OUTPUT
echo "statuscode=$statuscode" >> $GITHUB_OUTPUT
set +f # re-enable globbing
- name: Exit action
shell: bash
run: exit ${{steps.check.outputs.statuscode}}
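A minimal usage sketch for the composite above; the `uses:` ref, the `./scripts` scan directory, and the `vendor` exclusion are illustrative assumptions.

```yaml
# Hypothetical workflow: run ShellCheck over scripts/ with warning as the minimum severity.
name: shellcheck
on: pull_request
jobs:
  shellcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ShellCheck
        uses: mislav/action-shellcheck@master   # ref is illustrative; pin as appropriate
        with:
          scandir: ./scripts        # assumed script location
          severity: warning
          ignore_paths: vendor      # example exclusion
```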
Action ID: marketplace/crowdin/github-action
Author: Unknown
Publisher: crowdin
Repository: github.com/crowdin/github-action
This action allows you to manage and synchronize localization resources with your Crowdin project
| Name | Required | Description |
|---|---|---|
| upload_sources | Optional | Upload sources to Crowdin Default: true |
| upload_sources_args | Optional | Additional arguments which will be passed to the `upload sources` cli command |
| upload_translations | Optional | Upload translations to Crowdin Default: false |
| upload_language | Optional | Use this option to upload translations for a single specified language - Case-Sensitive |
| auto_approve_imported | Optional | Automatically approves uploaded translations Default: false |
| import_eq_suggestions | Optional | Defines whether to add translation if it is equal to source string in Crowdin project Default: false |
| upload_translations_args | Optional | Additional arguments which will be passed to the `upload translations` cli command |
| download_sources | Optional | Defines whether to download source files from the Crowdin project Default: false |
| push_sources | Optional | Push downloaded sources to the branch Default: true |
| download_sources_args | Optional | Additional arguments which will be passed to the `download sources` cli command |
| download_translations | Optional | Defines whether to download translation files from the Crowdin project Default: false |
| download_language | Optional | Use this option to download translations for a single specified language |
| download_bundle | Optional | Download bundle from Crowdin project (by ID) |
| skip_untranslated_strings | Optional | Skip untranslated strings in exported files (does not work with .docx, .html, .md and other document files) Default: false |
| skip_untranslated_files | Optional | Omit downloading not fully translated files Default: false |
| export_only_approved | Optional | Include approved translations only in exported files. If not combined with --skip-untranslated-strings option, strings without approval are fulfilled with the source language Default: false |
| push_translations | Optional | Push downloaded translations to the branch Default: true |
| commit_message | Optional | Commit message for download translations Default: New Crowdin translations by GitHub Action |
| localization_branch_name | Optional | To download translations to the specified version branch Default: l10n_crowdin_action |
| create_pull_request | Optional | Create pull request after pushing to branch Default: true |
| pull_request_title | Optional | The title of the new pull request Default: New Crowdin translations by GitHub Action |
| pull_request_body | Optional | The contents of the pull request |
| pull_request_assignees | Optional | Add up to 10 assignees to the created pull request (separated by comma) |
| pull_request_reviewers | Optional | Usernames of people from whom a review is requested for this pull request (separated by comma) |
| pull_request_team_reviewers | Optional | Team slugs from which a review is requested for this pull request (separated by comma) |
| pull_request_labels | Optional | To add labels for created pull request |
| pull_request_base_branch_name | Optional | Create pull request to specified branch instead of default one |
| download_translations_args | Optional | Additional arguments which will be passed to the `download translations` cli command |
| skip_ref_checkout | Optional | Skip default git checkout on GITHUB_REF Default: false |
| crowdin_branch_name | Optional | Option to upload or download files to the specified version branch in your Crowdin project |
| config | Optional | Option to specify a path to the configuration file, without / at the beginning |
| dryrun_action | Optional | Option to preview the list of managed files Default: false |
| github_base_url | Optional | Option to configure the base URL of GitHub server, if using GHE. Default: github.com |
| github_api_base_url | Optional | Options to configure the base URL of GitHub server for API requests, if using GHE and different from api.github_base_url. |
| github_user_name | Optional | Option to configure GitHub user name on commits. Default: Crowdin Bot |
| github_user_email | Optional | Option to configure GitHub user email on commits. Default: support+bot@crowdin.com |
| gpg_private_key | Optional | GPG private key in ASCII-armored format |
| gpg_passphrase | Optional | GPG Passphrase |
| project_id | Optional | Numerical ID of the project |
| token | Optional | Personal access token required for authentication |
| base_url | Optional | Base URL of Crowdin server for API requests execution |
| base_path | Optional | Path to your project directory on a local machine, without / at the beginning |
| source | Optional | Path to the source files, without / at the beginning |
| translation | Optional | Path to the translation files |
| command | Optional | Crowdin CLI command to execute |
| command_args | Optional | Additional arguments which will be passed to the Crowdin CLI command |

| Name | Description |
|---|---|
| pull_request_url | The URL of the pull request created by the workflow |
| pull_request_number | The number of the pull request created by the workflow |
| pull_request_created | Whether a new pull request was created (true) or an existing one was found (false) |
name: 'crowdin-action'
description: 'This action allows you to manage and synchronize localization resources with your Crowdin project'
branding:
icon: 'refresh-cw'
color: 'green'
inputs:
# upload sources options
upload_sources:
description: 'Upload sources to Crowdin'
default: 'true'
required: false
upload_sources_args:
description: 'Additional arguments which will be passed to the `upload sources` cli command'
default: ''
required: false
# upload translations options
upload_translations:
description: 'Upload translations to Crowdin'
default: 'false'
required: false
upload_language:
description: 'Use this option to upload translations for a single specified language - Case-Sensitive'
required: false
auto_approve_imported:
description: 'Automatically approves uploaded translations'
default: 'false'
required: false
import_eq_suggestions:
description: 'Defines whether to add translation if it is equal to source string in Crowdin project'
default: 'false'
required: false
upload_translations_args:
description: 'Additional arguments which will be passed to the `upload translations` cli command'
default: ''
required: false
# download sources options
download_sources:
description: 'Defines whether to download source files from the Crowdin project'
default: 'false'
required: false
push_sources:
description: 'Push downloaded sources to the branch'
default: 'true'
required: false
download_sources_args:
description: 'Additional arguments which will be passed to the `download sources` cli command'
default: ''
required: false
# download translations options
download_translations:
description: 'Defines whether to download translation files from the Crowdin project'
default: 'false'
required: false
download_language:
description: 'Use this option to download translations for a single specified language'
required: false
download_bundle:
description: 'Download bundle from Crowdin project (by ID)'
default: ''
required: false
skip_untranslated_strings:
description: 'Skip untranslated strings in exported files (does not work with .docx, .html, .md and other document files)'
default: 'false'
required: false
skip_untranslated_files:
description: 'Omit downloading not fully translated files'
default: 'false'
required: false
export_only_approved:
description: 'Include approved translations only in exported files. If not combined with --skip-untranslated-strings option, strings without approval are fulfilled with the source language'
default: 'false'
required: false
push_translations:
description: 'Push downloaded translations to the branch'
default: 'true'
required: false
commit_message:
description: 'Commit message for download translations'
default: 'New Crowdin translations by GitHub Action'
required: false
localization_branch_name:
description: 'To download translations to the specified version branch'
default: 'l10n_crowdin_action'
required: false
create_pull_request:
description: 'Create pull request after pushing to branch'
default: 'true'
required: false
pull_request_title:
description: 'The title of the new pull request'
default: 'New Crowdin translations by GitHub Action'
required: false
pull_request_body:
description: 'The contents of the pull request'
required: false
pull_request_assignees:
description: 'Add up to 10 assignees to the created pull request (separated by comma)'
required: false
pull_request_reviewers:
description: 'Usernames of people from whom a review is requested for this pull request (separated by comma)'
required: false
pull_request_team_reviewers:
description: 'Team slugs from which a review is requested for this pull request (separated by comma)'
required: false
pull_request_labels:
description: 'To add labels for created pull request'
required: false
pull_request_base_branch_name:
description: 'Create pull request to specified branch instead of default one'
required: false
download_translations_args:
description: 'Additional arguments which will be passed to the `download translations` cli command'
default: ''
required: false
skip_ref_checkout:
description: 'Skip default git checkout on GITHUB_REF'
default: 'false'
required: false
# global options
crowdin_branch_name:
description: 'Option to upload or download files to the specified version branch in your Crowdin project'
required: false
config:
description: 'Option to specify a path to the configuration file, without / at the beginning'
required: false
dryrun_action:
description: 'Option to preview the list of managed files'
default: 'false'
required: false
# GitHub (Enterprise) configuration
github_base_url:
description: 'Option to configure the base URL of GitHub server, if using GHE.'
default: 'github.com'
required: false
github_api_base_url:
description: 'Options to configure the base URL of GitHub server for API requests, if using GHE and different from api.github_base_url.'
required: false
github_user_name:
description: 'Option to configure GitHub user name on commits.'
default: 'Crowdin Bot'
required: false
github_user_email:
description: 'Option to configure GitHub user email on commits.'
default: 'support+bot@crowdin.com'
required: false
gpg_private_key:
description: 'GPG private key in ASCII-armored format'
required: false
gpg_passphrase:
description: 'GPG Passphrase'
default: ''
required: false
# config options
project_id:
description: 'Numerical ID of the project'
required: false
token:
description: 'Personal access token required for authentication'
required: false
base_url:
description: 'Base URL of Crowdin server for API requests execution'
required: false
base_path:
description: 'Path to your project directory on a local machine, without / at the beginning'
required: false
source:
description: 'Path to the source files, without / at the beginning'
required: false
translation:
description: 'Path to the translation files'
required: false
# command options
command:
description: 'Crowdin CLI command to execute'
required: false
command_args:
description: 'Additional arguments which will be passed to the Crowdin CLI command'
required: false
outputs:
pull_request_url:
description: 'The URL of the pull request created by the workflow'
pull_request_number:
description: 'The number of the pull request created by the workflow'
pull_request_created:
description: 'Whether a new pull request was created (true) or an existing one was found (false)'
runs:
using: docker
image: 'Dockerfile'
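A minimal usage sketch; the `uses:` tag, the trigger branch, and the secret/environment variable names are assumptions chosen for illustration.

```yaml
# Hypothetical workflow: upload sources and open a PR with fresh translations.
name: crowdin-sync
on:
  push:
    branches: [main]
jobs:
  crowdin:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Sync with Crowdin
        uses: crowdin/github-action@v2           # tag is illustrative
        with:
          upload_sources: true
          download_translations: true
          create_pull_request: true
          localization_branch_name: l10n_crowdin_action
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CROWDIN_PROJECT_ID: ${{ secrets.CROWDIN_PROJECT_ID }}         # assumed secret names
          CROWDIN_PERSONAL_TOKEN: ${{ secrets.CROWDIN_PERSONAL_TOKEN }}
```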
Action ID: marketplace/mheap/review-go-dependency-updates-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/review-go-dependency-updates-action
Set the PR description to contain all release notes for updated libraries
| Name | Required | Description |
|---|---|---|
| token | Optional | A GitHub API token used to call the API Default: ${{ github.token }} |
name: Review Go Dependency Updates
description: Set the PR description to contain all release notes for updated libraries
runs:
using: docker
image: Dockerfile
branding:
icon: slash
color: orange
inputs:
token:
description: "A GitHub API token used to call the API"
default: ${{ github.token }}
required: false
Action ID: marketplace/miloserdow/capistrano-deploy
Author: Unknown
Publisher: miloserdow
Repository: github.com/miloserdow/capistrano-deploy
Deploy an application using capistrano to the given target
| Name | Required | Description |
|---|---|---|
| capistrano_commands | Required | The Capistrano commands to run Default: ["deploy"] |
| target | Optional | Environment where deploy is to be performed to |
| deploy_key | Required | Deployment key used for decryption of SSH RSA private key |
| enc_rsa_key_pth | Optional | Path to SSH private key encrypted with deploy_key Default: config/deploy_id_rsa_enc |
| enc_rsa_key_val | Optional | Value of the SSH private key encrypted with deploy_key |
| working-directory | Optional | The directory from which to run the deploy command |
name: 'Capistrano deploy'
description: 'Deploy an application using capistrano to the given target'
branding:
icon: 'arrow-right-circle'
color: 'black'
inputs:
capistrano_commands:
description: 'The Capistrano commands to run'
required: true
default: '["deploy"]'
target: # string
description: 'Environment where deploy is to be performed to'
required: false # if no param is given, default cap target will be selected
deploy_key:
description: 'Deployment key used for decryption of SSH RSA private key'
required: true
enc_rsa_key_pth:
description: 'Path to SSH private key encrypted with deploy_key'
required: false
default: 'config/deploy_id_rsa_enc'
enc_rsa_key_val:
description: 'Value of the SSH private key encrypted with deploy_key'
required: false
working-directory:
description: 'The directory from which to run the deploy command'
required: false
default: ''
runs:
using: 'node20'
main: 'lib/run.js'
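A usage sketch under assumed conventions: the `uses:` ref, the Ruby setup step, and the `DEPLOY_ENC_KEY` secret name are placeholders, not values confirmed by the action's docs.

```yaml
# Hypothetical workflow: deploy to production with Capistrano after checkout and Ruby setup.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Capistrano deploy
        uses: miloserdow/capistrano-deploy@master    # ref is illustrative
        with:
          target: production                          # selects the Capistrano stage
          deploy_key: ${{ secrets.DEPLOY_ENC_KEY }}   # assumed secret holding the decryption key
```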
Action ID: marketplace/mheap/github-action-pull-request-milestone-node
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-pull-request-milestone-node
Congratulate people when they hit a certain number of merged pull requests
name: Pull Request Milestone
description: Congratulate people when they hit a certain number of merged pull requests
runs:
using: docker
image: Dockerfile
Action ID: marketplace/rhysd/pnpm-action-setup
Author: Unknown
Publisher: rhysd
Repository: github.com/rhysd/pnpm-action-setup
Install pnpm package manager
| Name | Required | Description |
|---|---|---|
| version | Optional | Version of pnpm to install |
| dest | Optional | Where to store pnpm files Default: ~/setup-pnpm |
| run_install | Optional | If specified, run `pnpm install` Default: null |
| package_json_file | Optional | File path to the package.json to read "packageManager" configuration Default: package.json |
| standalone | Optional | When set to true, @pnpm/exe, which is a Node.js bundled package, will be installed, enabling using pnpm without Node.js. Default: false |
name: Setup pnpm
description: Install pnpm package manager
branding:
icon: package
color: orange
inputs:
version:
description: Version of pnpm to install
required: false
dest:
description: Where to store pnpm files
required: false
default: ~/setup-pnpm
run_install:
description: If specified, run `pnpm install`
required: false
default: 'null'
package_json_file:
description: File path to the package.json to read "packageManager" configuration
required: false
default: 'package.json'
standalone:
description: When set to true, @pnpm/exe, which is a Node.js bundled package, will be installed, enabling using pnpm without Node.js.
required: false
default: 'false'
runs:
using: node20
main: dist/index.js
post: dist/index.js
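A minimal usage sketch; the `uses:` ref and the pnpm major version are illustrative, and the final test command assumes a `test` script exists in package.json.

```yaml
# Hypothetical workflow: install pnpm, then install dependencies and run tests.
name: pnpm-ci
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rhysd/pnpm-action-setup@main   # ref is illustrative
        with:
          version: 9                          # omit to use package.json's "packageManager" field
          run_install: true
      - run: pnpm test                        # assumes a "test" script in package.json
```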
Action ID: marketplace/pgrimaud/action-shopify-cli
Author: Pierre Grimaud
Publisher: pgrimaud
Repository: github.com/pgrimaud/action-shopify-cli
Deploy Shopify theme with Shopify CLI
name: 'Shopify CLI theme deploy'
author: 'Pierre Grimaud'
description: 'Deploy Shopify theme with Shopify CLI'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'shopping-bag'
color: 'green'
Action ID: marketplace/mszostok/codeowners-validator
Author: szostok.mateusz@gmail.com
Publisher: mszostok
Repository: github.com/mszostok/codeowners-validator
GitHub action to ensure the correctness of your CODEOWNERS file.
| Name | Required | Description |
|---|---|---|
| github_access_token | Optional | The GitHub access token. Instruction for creating a token can be found here: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/#creating-a-token. If not provided then validating owners functionality could not work properly, e.g. you can reach the API calls quota or if you are setting GitHub Enterprise base URL then an unauthorized error can occur. |
| github_app_id | Optional | Github App ID for authentication. This replaces the GITHUB_ACCESS_TOKEN. Instruction for creating a Github App can be found here: https://github.com/mszostok/codeowners-validator/blob/main/docs/gh-token.md |
| github_app_installation_id | Optional | Github App Installation ID. Required when GITHUB_APP_ID is set. |
| github_app_private_key | Optional | Github App private key in PEM format. Required when GITHUB_APP_ID is set. |
| github_base_url | Optional | The GitHub base URL for API requests. Defaults to the public GitHub API, but can be set to a domain endpoint to use with GitHub Enterprise. Default: https://api.github.com/ |
| github_upload_url | Optional | The GitHub upload URL for uploading files. It is taken into account only when the GITHUB_BASE_URL is also set. If only the GITHUB_BASE_URL is provided then this parameter defaults to the GITHUB_BASE_URL value. Default: https://uploads.github.com/ |
| experimental_checks | Optional | The comma-separated list of experimental checks that should be executed. By default, all experimental checks are turned off. Possible values: notowned. |
| checks | Optional | The list of checks that will be executed. By default, all checks are executed. Possible values: files,owners,duppatterns,syntax |
| repository_path | Optional | The repository path in which CODEOWNERS file should be validated. Default: . |
| check_failure_level | Optional | Defines the level on which the application should treat check issues as failures. Defaults to warning, which treats both errors and warnings as failures, and exits with error code 3. Possible values are error and warning. Default: warning |
| not_owned_checker_skip_patterns | Optional | The comma-separated list of patterns that should be ignored by not-owned-checker. For example, you can specify * and as a result, the * pattern from the CODEOWNERS file will be ignored and files owned by this pattern will be reported as unowned unless a later specific pattern will match that path. It's useful because often we have a default owners entry at the beginning of the CODEOWNERS file, e.g. * @global-owner1 @global-owner2 |
| owner_checker_repository | Optional | The owner and repository name. For example, gh-codeowners/codeowners-samples. Used to check if GitHub team is in the given organization and has permission to the given repository. Default: ${{ github.repository }} |
| owner_checker_ignored_owners | Optional | The comma-separated list of owners that should not be validated. Example: @owner1,@owner2,@org/team1,example@email.com. |
| owner_checker_allow_unowned_patterns | Optional | Specifies whether CODEOWNERS may have unowned files. For example, `/infra/oncall-rotator/oncall-config.yml` doesn't have owner and this is not reported. Default: true |
| owner_checker_owners_must_be_teams | Optional | Specifies whether only teams are allowed as owners of files. Default: false |
| not_owned_checker_subdirectories | Optional | Only check listed subdirectories for CODEOWNERS ownership that don't have owners. |
| not_owned_checker_trust_workspace | Optional | Specifies whether the repository path should be marked as safe. See: https://github.com/actions/checkout/issues/766 Default: true |
name: "GitHub CODEOWNERS Validator"
description: "GitHub action to ensure the correctness of your CODEOWNERS file."
author: "szostok.mateusz@gmail.com"
inputs:
github_access_token:
description: "The GitHub access token. Instruction for creating a token can be found here: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/#creating-a-token. If not provided then validating owners functionality could not work properly, e.g. you can reach the API calls quota or if you are setting GitHub Enterprise base URL then an unauthorized error can occur."
required: false
github_app_id:
description: "Github App ID for authentication. This replaces the GITHUB_ACCESS_TOKEN. Instruction for creating a Github App can be found here: https://github.com/mszostok/codeowners-validator/blob/main/docs/gh-token.md"
required: false
github_app_installation_id:
description: "Github App Installation ID. Required when GITHUB_APP_ID is set."
required: false
github_app_private_key:
description: "Github App private key in PEM format. Required when GITHUB_APP_ID is set."
required: false
github_base_url:
description: "The GitHub base URL for API requests. Defaults to the public GitHub API, but can be set to a domain endpoint to use with GitHub Enterprise. Default: https://api.github.com/"
required: false
github_upload_url:
description: "The GitHub upload URL for uploading files. It is taken into account only when the GITHUB_BASE_URL is also set. If only the GITHUB_BASE_URL is provided then this parameter defaults to the GITHUB_BASE_URL value. Default: https://uploads.github.com/"
required: false
experimental_checks:
description: "The comma-separated list of experimental checks that should be executed. By default, all experimental checks are turned off. Possible values: notowned."
default: ""
required: false
checks:
description: "The list of checks that will be executed. By default, all checks are executed. Possible values: files,owners,duppatterns,syntax"
required: false
default: ""
repository_path:
description: "The repository path in which CODEOWNERS file should be validated."
required: false
default: "."
check_failure_level:
description: "Defines the level on which the application should treat check issues as failures. Defaults to warning, which treats both errors and warnings as failures, and exits with error code 3. Possible values are error and warning. Default: warning"
required: false
not_owned_checker_skip_patterns:
description: "The comma-separated list of patterns that should be ignored by not-owned-checker. For example, you can specify * and as a result, the * pattern from the CODEOWNERS file will be ignored and files owned by this pattern will be reported as unowned unless a later specific pattern will match that path. It's useful because often we have default owners entry at the begging of the CODOEWNERS file, e.g. * @global-owner1 @global-owner2"
required: false
owner_checker_repository:
description: "The owner and repository name. For example, gh-codeowners/codeowners-samples. Used to check if GitHub team is in the given organization and has permission to the given repository."
required: false
default: "${{ github.repository }}"
owner_checker_ignored_owners:
description: "The comma-separated list of owners that should not be validated. Example: @owner1,@owner2,@org/team1,example@email.com."
required: false
owner_checker_allow_unowned_patterns:
description: "Specifies whether CODEOWNERS may have unowned files. For example, `/infra/oncall-rotator/oncall-config.yml` doesn't have owner and this is not reported."
default: "true"
required: false
owner_checker_owners_must_be_teams:
description: "Specifies whether only teams are allowed as owners of files."
default: "false"
required: false
not_owned_checker_subdirectories:
description: "Only check listed subdirectories for CODEOWNERS ownership that don't have owners."
required: false
not_owned_checker_trust_workspace:
description: "Specifies whether the repository path should be marked as safe. See: https://github.com/actions/checkout/issues/766"
required: false
default: "true"
runs:
using: 'docker'
image: 'docker://ghcr.io/mszostok/codeowners-validator:v0.7.4'
env:
ENVS_PREFIX: "INPUT"
branding:
icon: "shield"
color: "gray-dark"
Action ID: marketplace/amirisback/nutrition-framework
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/nutrition-framework
Final Project for Bachelor Degree, Full and Clear Documentation, Have Empty View :books:
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Nutrition Framework'
description: 'Final Project for Bachelor Degree, Full and Clear Documentation, Have Empty View :books:'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/FranzDiebold/html-to-pdf-action
Author: fifsky@gmail.com
Publisher: FranzDiebold
Repository: github.com/FranzDiebold/html-to-pdf-action
Converts HTML file to PDF.
| Name | Required | Description |
|---|---|---|
| htmlFile | Required | html file path |
| outputFile | Required | output file path |
| pdfOptions | Optional | PDF options as described here: https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pagepdfoptions - Needs to be in JSON format, e.g. `{"format": "A4", "pageRanges": "1"}` Default: {} |
name: "HTML to PDF"
description: "Converts HTML file to PDF."
author: "fifsky@gmail.com"
inputs:
htmlFile:
description: "html file path"
required: true
outputFile:
description: "output file path"
required: true
pdfOptions:
description: |
PDF options as described here:
https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pagepdfoptions -
Needs to be in JSON format, e.g. `{"format": "A4", "pageRanges": "1"}`
required: false
default: "{}"
runs:
using: "docker"
image: "Dockerfile"
branding:
color: "blue"
icon: "file"
Action ID: marketplace/ammaraskar/sphinx-action
Author: Ammar Askar
Publisher: ammaraskar
Repository: github.com/ammaraskar/sphinx-action
Builds documentation using Sphinx
| Name | Required | Description |
|---|---|---|
| docs-folder | Required | The folder containing your sphinx docs. Default: docs/ |
| build-command | Optional | The command used to build your documentation. Default: make html |
| pre-build-command | Optional | Run before the build command, you can use this to install system level dependencies, for example with "apt-get update -y && apt-get install -y perl" |
name: 'Sphinx Build'
description: 'Builds documentation using Sphinx'
author: 'Ammar Askar'
branding:
icon: 'book'
color: 'yellow'
inputs:
docs-folder:
description:
The folder containing your sphinx docs.
required: true
default: "docs/"
build-command:
description:
The command used to build your documentation.
required: false
default: make html
pre-build-command:
description:
Run before the build command, you can use this to install system level
dependencies, for example with
"apt-get update -y && apt-get install -y perl"
required: false
runs:
using: 'docker'
image: 'Dockerfile'
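A minimal usage sketch; the `docs/requirements.txt` file referenced in the pre-build command is an assumed project convention, not something the action requires.

```yaml
# Hypothetical workflow: build Sphinx docs, installing extra requirements first.
name: docs
on: [push, pull_request]
jobs:
  build-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build documentation
        uses: ammaraskar/sphinx-action@master   # ref is illustrative
        with:
          docs-folder: "docs/"
          pre-build-command: "pip install -r docs/requirements.txt"   # assumed requirements file
          build-command: "make html"
```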
Action ID: marketplace/appleboy/git-push-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/git-push-action
push changes to a remote git repository.
| Name | Required | Description |
|---|---|---|
| author_name | Optional | git author name Default: GitHub Actions |
| author_email | Optional | git author email Default: 41898282+github-actions[bot]@users.noreply.github.com |
| netrc_machine | Optional | netrc machine |
| netrc_username | Optional | netrc username |
| netrc_password | Optional | netrc password |
| ssh_key | Optional | private ssh key |
| remote | Required | url of the remote repo |
| remote_name | Optional | name of the remote repo Default: deploy |
| branch | Optional | name of remote branch Default: master |
| local_branch | Optional | name of local branch Default: HEAD |
| path | Optional | path to git repo |
| force | Optional | force push to remote |
| followtags | Optional | push to remote with tags |
| skip_verify | Optional | skip ssl verification |
| commit | Optional | commit dirty changes |
| commit_message | Optional | commit message |
| tag | Optional | tag to add to the commit |
| empty_commit | Optional | allow empty commit |
| no_verify | Optional | bypasses the pre-commit and commit-msg hooks |
| rebase | Optional | rebase local branch on top of remote branch |
name: 'Git Push Changes'
description: 'push changes to a remote git repository.'
author: 'Bo-Yi Wu'
inputs:
author_name:
description: 'git author name'
default: 'GitHub Actions'
author_email:
description: 'git author email'
default: '41898282+github-actions[bot]@users.noreply.github.com'
netrc_machine:
description: 'netrc machine'
netrc_username:
description: 'netrc username'
netrc_password:
description: 'netrc password'
ssh_key:
description: 'private ssh key'
remote:
description: 'url of the remote repo'
required: true
remote_name:
description: 'name of the remote repo'
default: 'deploy'
branch:
description: 'name of remote branch'
default: 'master'
local_branch:
description: 'name of local branch'
default: 'HEAD'
path:
description: 'path to git repo'
force:
description: 'force push to remote'
followtags:
description: 'push to remote with tags'
skip_verify:
description: 'skip ssl verification'
commit:
description: 'commit dirty changes'
commit_message:
description: 'commit message'
tag:
description: 'tag to add to the commit'
empty_commit:
description: 'allow empty commit'
no_verify:
description: 'bypasses the pre-commit and commit-msg hooks'
rebase:
description: 'rebase local branch on top of remote branch'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'terminal'
color: 'gray-dark'
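A usage sketch under assumed settings: the `uses:` ref, the mirror URL, the SSH key secret name, and the branch names are placeholders chosen for illustration.

```yaml
# Hypothetical workflow: commit dirty changes and push them to a mirror repository over SSH.
name: git-push
on:
  push:
    branches: [main]
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Push to mirror
        uses: appleboy/git-push-action@master      # ref is illustrative; pin to a release
        with:
          remote: git@example.com:org/mirror.git   # placeholder remote URL
          ssh_key: ${{ secrets.DEPLOY_SSH_KEY }}   # assumed secret name
          branch: main
          commit: true
          commit_message: "chore: sync generated files"
```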
Action ID: marketplace/ravsamhq/notify-slack-action
Author: RavSam Web Solutions
Publisher: ravsamhq
Repository: github.com/ravsamhq/notify-slack-action
Send Github Actions workflow status notifications to Slack
| Name | Required | Description |
|---|---|---|
| status | Required | Job Status |
| token | Optional | Github Token for accessing workflow url |
| notification_title | Optional | Specify on the notification message title Default: New Github Action Run |
| message_format | Optional | Specify on the notification message format Default: {emoji} *{workflow}* {status_message} in <{repo_url}\|{repo}@{branch}> on <{commit_url}\|{commit_sha}> |
| footer | Optional | Specify the footer of the message Default: <{run_url}\|View Run> \| Developed by <https://www.ravsam.in\|RavSam> |
| notify_when | Optional | Specify on which events a slack notification is sent Default: success,failure,cancelled,warnings,skipped |
| mention_users | Optional | Specify the slack IDs of users you want to mention |
| mention_users_when | Optional | Specify on which events you want to mention the users Default: success,failure,cancelled,warnings,skipped |
| mention_groups | Optional | Specify the slack IDs of groups you want to mention |
| mention_groups_when | Optional | Specify on which events you want to mention the groups Default: success,failure,cancelled,warnings,skipped |
| icon_success | Optional | Specify on icon to be used when event is success Default: :heavy_check_mark: |
| icon_failure | Optional | Specify on icon to be used when event is failure Default: :x: |
| icon_cancelled | Optional | Specify on icon to be used when event is cancelled Default: :x: |
| icon_warnings | Optional | Specify on icon to be used when event is warnings Default: :large_orange_diamond: |
| icon_skipped | Optional | Specify on icon to be used when event is skipped Default: :fast_forward: |
name: Notify Slack Action
description: Send Github Actions workflow status notifications to Slack
author: RavSam Web Solutions
inputs:
status:
description: Job Status
required: true
token:
description: Github Token for accessing workflow url
required: false
default: ""
notification_title:
description: Specify on the notification message title
required: false
default: "New Github Action Run"
message_format:
description: Specify on the notification message format
required: false
default: "{emoji} *{workflow}* {status_message} in <{repo_url}|{repo}@{branch}> on <{commit_url}|{commit_sha}>"
footer:
description: Specify the footer of the message
required: false
default: "<{run_url}|View Run> | Developed by <https://www.ravsam.in|RavSam>"
notify_when:
description: Specify on which events a slack notification is sent
required: false
default: "success,failure,cancelled,warnings,skipped"
mention_users:
description: Specify the slack IDs of users you want to mention
required: false
default: ""
mention_users_when:
description: Specify on which events you want to mention the users
required: false
default: "success,failure,cancelled,warnings,skipped"
mention_groups:
description: Specify the slack IDs of groups you want to mention
required: false
default: ""
mention_groups_when:
description: Specify on which events you want to mention the groups
required: false
default: "success,failure,cancelled,warnings,skipped"
icon_success:
description: Specify on icon to be used when event is success
required: false
default: ":heavy_check_mark:"
icon_failure:
description: Specify on icon to be used when event is failure
required: false
default: ":x:"
icon_cancelled:
description: Specify on icon to be used when event is cancelled
required: false
default: ":x:"
icon_warnings:
description: Specify on icon to be used when event is warnings
required: false
default: ":large_orange_diamond:"
icon_skipped:
description: Specify on icon to be used when event is skipped
required: false
default: ":fast_forward:"
branding:
icon: send
color: blue
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/DeLaGuardo/clojure-lint-action
Author: DeLaGuardo
Publisher: DeLaGuardo
Repository: github.com/DeLaGuardo/clojure-lint-action
Lint your clojure code with clj-kondo in parallel to your builds
| Name | Required | Description |
|---|---|---|
| clj-kondo-args | Required | Arguments to be passed to clj-kondo Default: --lint src |
| check-name | Optional | Check name will be visible in Github Checks list Default: clj-kondo check |
| github_token | Required | Github token to report linter results back to check |
name: 'clj-kondo checks'
description: 'Lint your clojure code with clj-kondo in parallel to your builds'
author: 'DeLaGuardo'
branding:
icon: 'gift'
color: 'blue'
inputs:
clj-kondo-args:
description: 'Arguments to be passed to clj-kondo'
required: true
default: '--lint src'
check-name:
description: 'Check name will be visible in Github Checks list'
default: 'clj-kondo check'
github_token:
description: 'Github token to report linter results back to check'
required: true
runs:
using: 'docker'
image: 'Dockerfile'
env:
CHECK_NAME: ${{ inputs.check-name }}
LINT_ARGS: ${{ inputs.clj-kondo-args }}
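A minimal usage sketch; the `uses:` ref and the `src test` lint paths are illustrative, while the token is passed via the documented `github_token` input.

```yaml
# Hypothetical workflow: run clj-kondo over src and test in parallel with the build.
name: clj-kondo
on: pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: clj-kondo
        uses: DeLaGuardo/clojure-lint-action@master   # ref is illustrative
        with:
          clj-kondo-args: "--lint src test"
          check-name: "clj-kondo"
          github_token: ${{ secrets.GITHUB_TOKEN }}
```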
Action ID: marketplace/andstor/image-diff-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/image-diff-action
GitHub Action for producing image diffs
| Name | Required | Description |
|---|---|---|
| img_1 | Required | The path to the image file used as base image. |
| img_2 | Required | The path to the image file to be compared against the base image. |
| diff_color | Optional | Color for the diff highlighting. Default: 255, 255, 0 |
| ignore_mask | Optional | The path to the image file to be compared against the base image. |

| Name | Description |
|---|---|
| img-diff | The image diff. |
| mismatch | The mismatch percentage |
name: 'Image Diff Action'
description: 'GitHub Action for producing image diffs'
author: 'André Storhaug'
branding:
icon: 'image'
color: 'purple'
inputs:
img_1:
description: 'The path to the image file used as base image.'
required: true
img_2:
description: 'The path to the image file to be compared against the base image.'
required: true
diff_color:
description: 'Color for the diff highlighting.'
default: '255, 255, 0'
required: false
ignore_mask:
description: 'The path to the image file to be compared against the base image.'
required: false
outputs:
img-diff:
description: 'The image diff.'
mismatch:
description: 'The mismatch percentage'
runs:
using: 'node12'
main: 'dist/index.js'
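A usage sketch with assumed inputs: the pinned tag, the baseline/current image paths, and the red highlight color are placeholders for illustration.

```yaml
# Hypothetical workflow: compare two rendered screenshots and report the mismatch.
name: image-diff
on: pull_request
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Diff images
        id: diff
        uses: andstor/image-diff-action@v1   # tag is illustrative
        with:
          img_1: baseline/home.png           # assumed paths
          img_2: current/home.png
          diff_color: "255, 0, 0"
      - run: echo "Mismatch: ${{ steps.diff.outputs.mismatch }}%"
```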
Action ID: marketplace/amirisback/android-granting-daily-reward
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-granting-daily-reward
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/IaC-Telemetry
Author: Unknown
Publisher: azure
Repository: github.com/azure/IaC-Telemetry
Send IaC Telemetry
| Name | Required | Description |
|---|---|---|
| event-name | Required | Name of the event: plan-success, plan-failure, apply-success, apply-failure |
| directory | Required | Directory to run Terraform commands |
name: "IaC Telemetry"
description: "Send IaC Telemetry"
inputs:
event-name:
description: "Name of the event: plan-success, plan-failure, apply-success, apply-failure"
required: true
directory:
description: "Directory to run Terraform commands"
required: true
runs:
using: "composite"
steps:
- run: |
curl https://dc.services.visualstudio.com/v2/track -d @- <<EOF
{
"name": "EventData",
"time": "$(date --utc +%FT%T.%3NZ)",
"iKey": "b565e06f-9e18-4198-a62b-764a48d4e187",
"data": {
"baseType": "EventData",
"baseData": {
"name": "${{ inputs.event-name }}",
"properties": {
"workflow": "${{ github.workflow }}",
"run_number": "${{ github.run_number }}",
"actor": "${{ github.actor }}",
"repository": "${{ github.repository }}",
"event_name": "${{ github.event_name }}",
"directory": "${{ inputs.directory }}"
}
}
}
}
EOF
shell: bash
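A usage sketch around an assumed Terraform job; the `infra` directory, the plan step, and the `uses:` ref are placeholders, and `terraform init` is omitted for brevity.

```yaml
# Hypothetical workflow: emit a telemetry event after a successful terraform plan.
name: terraform
on: push
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Terraform plan
        run: terraform -chdir=infra plan   # assumes terraform is available and initialized
      - name: Send telemetry
        if: success()
        uses: azure/IaC-Telemetry@main     # ref is illustrative
        with:
          event-name: plan-success
          directory: infra
```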
Action ID: marketplace/peter-evans/autopep8
Author: Peter Evans
Publisher: peter-evans
Repository: github.com/peter-evans/autopep8
Automatically formats Python code to conform to the PEP 8 style guide.
| Name | Required | Description |
|---|---|---|
| args | Required | Arguments to pass to autopep8 Default: --help |

| Name | Description |
|---|---|
| exit-code | The exit code output by autopep8 |
name: 'autopep8'
author: 'Peter Evans'
description: 'Automatically formats Python code to conform to the PEP 8 style guide.'
inputs:
args:
description: 'Arguments to pass to autopep8'
required: true
default: '--help'
outputs:
exit-code:
description: 'The exit code output by autopep8'
runs:
using: 'docker'
image: 'docker://peterevans/autopep8:2.0.0'
args:
- ${{ inputs.args }}
branding:
icon: 'code'
color: 'blue'
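A minimal usage sketch; the pinned tag and the argument list are illustrative choices, and the final step simply surfaces the documented `exit-code` output.

```yaml
# Hypothetical workflow: format Python sources in place and surface the autopep8 exit code.
name: autopep8
on: pull_request
jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: autopep8
        id: autopep8
        uses: peter-evans/autopep8@v2   # tag is illustrative
        with:
          args: --recursive --in-place --aggressive --aggressive .
      - run: echo "autopep8 exit code: ${{ steps.autopep8.outputs.exit-code }}"
```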
Action ID: marketplace/actions/github-script
Author: GitHub
Publisher: actions
Repository: github.com/actions/github-script
Run simple scripts using the GitHub client
| Name | Required | Description |
|---|---|---|
| script | Required | The script to run |
| github-token | Optional | The GitHub token used to create an authenticated client Default: ${{ github.token }} |
| debug | Optional | Whether to tell the GitHub client to log details of its requests. true or false. Default is to run in debug mode when the GitHub Actions step debug logging is turned on. Default: ${{ runner.debug == '1' }} |
| user-agent | Optional | An optional user-agent string Default: actions/github-script |
| previews | Optional | A comma-separated list of GraphQL API previews to accept |
| result-encoding | Optional | Either "string" or "json" (default "json") - how the result will be encoded Default: json |
| retries | Optional | The number of times to retry a request Default: 0 |
| retry-exempt-status-codes | Optional | A comma separated list of status codes that will NOT be retried e.g. "400,500". No effect unless `retries` is set Default: 400,401,403,404,422 |
| base-url | Optional | An optional GitHub REST API URL to connect to a different GitHub instance. For example, https://my.github-enterprise-server.com/api/v3 |

| Name | Description |
|---|---|
| result | The return value of the script, stringified with `JSON.stringify` |
name: GitHub Script
author: GitHub
description: Run simple scripts using the GitHub client
branding:
color: blue
icon: code
inputs:
script:
description: The script to run
required: true
github-token:
description: The GitHub token used to create an authenticated client
default: ${{ github.token }}
required: false
debug:
description: Whether to tell the GitHub client to log details of its requests. true or false. Default is to run in debug mode when the GitHub Actions step debug logging is turned on.
default: ${{ runner.debug == '1' }}
user-agent:
description: An optional user-agent string
default: actions/github-script
previews:
description: A comma-separated list of GraphQL API previews to accept
result-encoding:
description: Either "string" or "json" (default "json")—how the result will be encoded
default: json
retries:
description: The number of times to retry a request
default: "0"
retry-exempt-status-codes:
description: A comma separated list of status codes that will NOT be retried e.g. "400,500". No effect unless `retries` is set
default: 400,401,403,404,422 # from https://github.com/octokit/plugin-retry.js/blob/9a2443746c350b3beedec35cf26e197ea318a261/src/index.ts#L14
base-url:
description: An optional GitHub REST API URL to connect to a different GitHub instance. For example, https://my.github-enterprise-server.com/api/v3
required: false
outputs:
result:
description: The return value of the script, stringified with `JSON.stringify`
runs:
using: node24
main: dist/index.js
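A minimal usage sketch that comments on newly opened issues via the pre-authenticated `github` client (the `@v7` tag is an assumption):

```yaml
on:
  issues:
    types: [opened]
jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/github-script@v7   # tag is an assumption
        with:
          script: |
            // Post a comment on the issue that triggered the workflow.
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: 'Thanks for opening this issue!'
            })
```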
Action ID: marketplace/azure/app-configuration-deploy-feature-flags
Author: Microsoft
Publisher: azure
Repository: github.com/azure/app-configuration-deploy-feature-flags
Deploy feature flags from a GitHub repository to Azure App Configuration
| Name | Required | Description |
|---|---|---|
path |
Required | Source file(s) containing feature flags. Multiple files can be specified in list or glob format. Example: **/*.json or FeatureFlags/*.json. Use ! to exclude files. |
app-configuration-endpoint |
Optional | Destination endpoint for the Azure App Configuration store. |
label |
Optional | Azure App Configuration label to apply to the feature flags. If not specified, the default is no label. Default: None |
operation |
Optional | Possible values: validate or deploy - deploy by default. validate: only validates the configuration file. deploy: deploys the feature flags to Azure App Configuration. Default: deploy |
strict |
Optional | If strict, the sync operation deletes feature flags not found in the config file. Choices: true, false. |
name: 'Azure App Configuration Deploy Feature Flags'
description:
'Deploy feature flags from a GitHub repository to Azure App Configuration'
author: 'Microsoft'
branding:
icon: 'heart'
color: 'red'
inputs:
path:
description:
'Source file(s) containing feature flags. Multiple files can be specified
in list or glob format. Example: **/*.json or FeatureFlags/*.json. Use !
to exclude files.'
required: true
app-configuration-endpoint:
description: 'Destination endpoint for the Azure App Configuration store.'
required: false
label:
description:
'Azure App Configuration label to apply to the feature flags. If not
specified, the default is no label.'
required: false
default: None
operation: # Validate the configuration file only
description:
'Possible values: validate or deploy - deploy by default. validate: only
validates the configuration file. deploy: deploys the feature flags to
Azure App Configuration.'
required: false
default: 'deploy'
strict:
description:
'If strict, the sync operation deletes feature flags not found in the
config file. Choices: true, false.'
required: false
runs:
using: node20
main: dist/index.js
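A minimal usage sketch; the `@v1` tag, secret names, endpoint, and file layout are assumptions, and `azure/login` is shown on the assumption that the action uses the ambient Azure credentials:

```yaml
on:
  push:
    branches: [main]
permissions:
  id-token: write   # required for OIDC login
  contents: read
jobs:
  deploy-flags:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/app-configuration-deploy-feature-flags@v1   # tag is an assumption
        with:
          path: FeatureFlags/*.json                                 # placeholder glob
          app-configuration-endpoint: https://my-store.azconfig.io  # placeholder endpoint
          label: production
          operation: deploy
```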
Action ID: marketplace/julia-actions/julia-fix-doctests
Author: Eric P. Hanson and contributors
Publisher: julia-actions
Repository: github.com/julia-actions/julia-fix-doctests
Run `doctest(MyPkg; fix=true)`
| Name | Required | Description |
|---|---|---|
package_path |
Optional | Path to the directory of the package. Only required for subdirectory packages. |
project |
Optional | Value passed to the --project flag. The default value is "docs" Default: docs |
name: 'julia-fix-doctests'
description: 'Run `doctest(MyPkg; fix=true)`'
author: 'Eric P. Hanson and contributors'
branding:
icon: 'package'
color: 'green'
inputs:
package_path:
description: 'Path to the directory of the package. Only required for subdirectory packages.'
default: ''
required: false
project:
description: 'Value passed to the --project flag. The default value is "docs"'
default: 'docs'
required: false
runs:
using: 'composite'
steps:
- name: Fix doctests
run: julia --project=${{ inputs.project }} "$GITHUB_ACTION_PATH"/fix_doctests.jl
shell: bash
env:
INPUT_PACKAGE_PATH: ${{ inputs.package_path }}
INPUT_PROJECT: ${{ inputs.project }}
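A minimal usage sketch (the `@v1` tag is an assumption; committing or pushing the fixed doctests afterwards is left out):

```yaml
on: [workflow_dispatch]
jobs:
  fix-doctests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - uses: julia-actions/julia-fix-doctests@v1   # tag is an assumption
        with:
          project: docs
```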
Action ID: marketplace/tgymnich/fork-sync
Author: tgymnich
Publisher: tgymnich
Repository: github.com/tgymnich/fork-sync
Keep your fork up to date
| Name | Required | Description |
|---|---|---|
owner |
Optional | Owner of the forked repository |
repo |
Optional | Name of the forked repository |
token |
Optional | Token for the github API Default: ${{ github.token }} |
head |
Optional | Branch to track Default: master |
base |
Optional | Branch to keep updated Default: master |
merge_method |
Optional | Merge method to use Default: merge |
pr_title |
Optional | The title of the pull request Default: Fork Sync: Update from parent repository |
pr_message |
Optional | The message in the pull request |
ignore_fail |
Optional | Ignore Exceptions |
auto_approve |
Optional | Automatically approve pull request before merge |
auto_merge |
Optional | Automatically merge the pull request Default: True |
retries |
Optional | Retry count Default: 4 |
retry_after |
Optional | Delay amount between retries Default: 60 |
name: 'Fork Sync'
description: 'Keep your fork up to date'
author: 'tgymnich'
branding:
icon: 'git-branch'
color: 'black'
inputs:
owner:
description: 'Owner of the forked repository'
required: false
repo:
description: 'Name of the forked repository'
required: false
token:
description: 'Token for the github API'
required: false
default: ${{ github.token }}
head:
description: 'Branch to track'
required: false
default: 'master'
base:
description: 'Branch to keep updated'
required: false
default: 'master'
merge_method:
description: 'Merge method to use'
required: false
default: 'merge'
pr_title:
description: 'The title of the pull request'
required: false
default: 'Fork Sync: Update from parent repository'
pr_message:
description: 'The message in the pull request'
required: false
ignore_fail:
description: 'Ignore Exceptions'
required: false
default: false
auto_approve:
description: 'Automatically approve pull request before merge'
required: false
default: false
auto_merge:
description: 'Automatically merge the pull request'
required: false
default: true
retries:
description: 'Retry count'
required: false
default: 4
retry_after:
description: 'Delay amount between retries'
required: false
default: 60
runs:
using: 'node20'
main: 'lib/main.js'
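A minimal scheduled sketch (the `@v2` tag and the branch names are assumptions):

```yaml
name: fork-sync
on:
  schedule:
    - cron: '0 6 * * *'   # once a day
jobs:
  sync:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: tgymnich/fork-sync@v2   # tag is an assumption
        with:
          head: main    # branch to track in the parent repository
          base: main    # branch to keep updated in the fork
```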
Action ID: marketplace/mheap/github-action-pull-request-milestone-bash
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-pull-request-milestone-bash
Congratulate people when they hit a certain number of merged pull requests
name: Pull Request Milestone
description: Congratulate people when they hit a certain number of merged pull requests
runs:
using: docker
image: Dockerfile
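The entry declares no inputs, so a usage sketch can only show a plausible trigger; the `@master` ref and any configuration the action reads beyond its declared inputs are assumptions:

```yaml
on:
  pull_request:
    types: [closed]
jobs:
  congratulate:
    runs-on: ubuntu-latest
    steps:
      - uses: mheap/github-action-pull-request-milestone-bash@master   # ref is an assumption
```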
Action ID: marketplace/peter-evans/slash-command-dispatch
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/slash-command-dispatch
Facilitates "ChatOps" by creating dispatch events for slash commands
| Name | Required | Description |
|---|---|---|
token |
Required | A repo scoped GitHub Personal Access Token. |
reaction-token |
Optional | An optional GitHub token to use for reactions. Default: ${{ github.token }} |
reactions |
Optional | Add reactions to comments containing commands. Default: True |
commands |
Required | A comma or newline separated list of commands. |
permission |
Optional | The repository permission level required by the user to dispatch commands. Default: write |
issue-type |
Optional | The issue type required for commands. Default: both |
allow-edits |
Optional | Allow edited comments to trigger command dispatches. |
repository |
Optional | The full name of the repository to send the dispatch events. Default: ${{ github.repository }} |
event-type-suffix |
Optional | The repository dispatch event type suffix for the commands. Default: -command |
static-args |
Optional | A comma or newline separated list of arguments that will be dispatched with every command. |
dispatch-type |
Optional | The dispatch type; `repository` or `workflow`. Default: repository |
config |
Optional | JSON configuration for commands. |
config-from-file |
Optional | JSON configuration from a file for commands. |
| Name | Description |
|---|---|
error-message |
Validation errors when using `workflow` dispatch. |
name: 'Slash Command Dispatch'
description: 'Facilitates "ChatOps" by creating dispatch events for slash commands'
inputs:
token:
description: 'A repo scoped GitHub Personal Access Token.'
required: true
reaction-token:
description: 'An optional GitHub token to use for reactions.'
default: ${{ github.token }}
reactions:
description: 'Add reactions to comments containing commands.'
default: true
commands:
description: 'A comma or newline separated list of commands.'
required: true
permission:
description: 'The repository permission level required by the user to dispatch commands.'
default: write
issue-type:
description: 'The issue type required for commands.'
default: both
allow-edits:
description: 'Allow edited comments to trigger command dispatches.'
default: false
repository:
description: 'The full name of the repository to send the dispatch events.'
default: ${{ github.repository }}
event-type-suffix:
description: 'The repository dispatch event type suffix for the commands.'
default: -command
static-args:
description: 'A comma or newline separated list of arguments that will be dispatched with every command.'
dispatch-type:
description: 'The dispatch type; `repository` or `workflow`.'
default: repository
config:
description: 'JSON configuration for commands.'
config-from-file:
description: 'JSON configuration from a file for commands.'
outputs:
error-message:
description: 'Validation errors when using `workflow` dispatch.'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'target'
color: 'gray-dark'
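A minimal usage sketch (the `@v4` tag and the `PAT` secret name are assumptions; the token must be a repo scoped PAT as noted above):

```yaml
on:
  issue_comment:
    types: [created]
jobs:
  dispatch:
    runs-on: ubuntu-latest
    steps:
      - uses: peter-evans/slash-command-dispatch@v4   # tag is an assumption
        with:
          token: ${{ secrets.PAT }}   # repo scoped PAT
          commands: |
            deploy
            rebase
          permission: write
```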
Action ID: marketplace/vsoch/split-list-action
Author: Unknown
Publisher: vsoch
Repository: github.com/vsoch/split-list-action
Generate a hashed list of identifiers to evenly split up.
| Name | Required | Description |
|---|---|---|
ids_file |
Required | Text file with identifiers to split, one per line. |
outfile |
Optional | output file to write to (otherwise prints to screen) |
random_split |
Optional | change default for random split N, if using |
calendar_split |
Required | split identifiers by letter matched to month (and return current day) |
name: "split-list-action"
description: "Generate a hashed list of identifiers to evenly split up."
branding:
icon: 'activity'
color: 'green'
inputs:
ids_file:
description: Text file with identifiers to split, one per line.
required: true
outfile:
description: output file to write to (otherwise prints to screen)
required: false
random_split:
description: change default for random split N, if using
required: false
calendar_split:
description: split identifiers by letter matched to month (and return current day)
required: true
default: false
runs:
using: "composite"
steps:
- name: Split
env:
ids_file: ${{ inputs.ids_file }}
outfile: ${{ inputs.outfile }}
calendar_split: ${{ inputs.calendar_split }}
random_split: ${{ inputs.random_split }}
action_path: ${{ github.action_path }}
run: |
cmd="python ${action_path}/generate.py"
if [[ "${outfile}" != "" ]]; then
cmd="${cmd} --outfile ${outfile}"
fi
if [[ "${keep_identifiers}" == "true" ]]; then
cmd="$cmd --keep-identifiers"
fi
if [[ "${calendar_split}" == "true" ]]; then
cmd="$cmd --calendar-split"
fi
if [[ "${calendar_split}" == "true" ]] && [[ "${random_split}" != "" ]]; then
printf "You can only do one of `random_split` or `calendar_split`\n"
exit 1;
fi
if [[ "${random_split}" != "" ]]; then
cmd="$cmd --random-split ${random_split}"
fi
cmd="${cmd} ${ids_file}"
printf "${cmd}\n"
$cmd
shell: bash
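A minimal usage sketch (the `@main` ref and the `ids.txt` path are assumptions):

```yaml
on: [workflow_dispatch]
jobs:
  split:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vsoch/split-list-action@main   # ref is an assumption
        with:
          ids_file: ids.txt       # one identifier per line
          calendar_split: true    # split by letter matched to month
          outfile: today.txt
```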
Action ID: marketplace/azure/trusted-signing-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/trusted-signing-action
Sign your files with Trusted Signing.
| Name | Required | Description |
|---|---|---|
azure-tenant-id |
Optional | The Azure Active Directory tenant (directory) ID. |
azure-client-id |
Optional | The client (application) ID of an App Registration in the tenant. |
azure-client-secret |
Optional | A client secret that was generated for the App Registration. |
azure-client-certificate-path |
Optional | A path to certificate and private key pair in PEM or PFX format, which can authenticate the App Registration. |
azure-client-send-certificate-chain |
Optional | Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to `true` or `1`, authentication requests include the x5c header. |
azure-username |
Optional | The username, also known as upn, of an Azure Active Directory user account. |
azure-password |
Optional | The password of the Azure Active Directory user account. Note this does not support accounts with MFA enabled. |
endpoint |
Required | The Trusted Signing Account endpoint. The URI value must align to the region in which the Trusted Signing Account and Certificate Profile you are specifying were created. |
code-signing-account-name |
Optional | The Code Signing Account name. |
trusted-signing-account-name |
Optional | The Trusted Signing Account name. |
certificate-profile-name |
Required | The Certificate Profile name. |
files |
Optional | A comma or newline separated list of absolute paths to the files being signed. Can be combined with the files-folder and file-catalog inputs. |
files-folder |
Optional | The folder containing files to be signed. Can be combined with the files and file-catalog inputs. |
files-folder-filter |
Optional | A comma separated list of file extensions that determines which types of files will be signed in the folder specified by the files-folder input. E.g., 'dll,exe,msix'. Any file type not included in this list will not be signed. If this input is not used, all files in the folder will be signed. Supports wildcards for matching multiple file names with a pattern. |
files-folder-recurse |
Optional | A boolean value (true/false) that indicates if the folder specified by the files-folder input should be searched recursively. The default value is false. |
files-folder-depth |
Optional | An integer value that indicates the depth of the recursive search toggled by the files-folder-recurse input. |
files-catalog |
Optional | A file containing a list of relative paths to the files being signed. The paths should be relative to the location of the catalog file. Each file path should be on a separate line. Can be combined with the files and files-folder inputs. |
file-digest |
Optional | The name of the digest algorithm used for hashing the files being signed. The supported values are SHA256, SHA384, and SHA512. Default: SHA256 |
timestamp-rfc3161 |
Optional | A URL to an RFC3161 compliant timestamping service. Default: http://timestamp.acs.microsoft.com |
timestamp-digest |
Optional | The name of the digest algorithm used for timestamping. The supported values are SHA256, SHA384, and SHA512. Default: SHA256 |
append-signature |
Optional | A boolean value (true/false) that indicates if the signature should be appended. If no primary signature is present, this signature is made the primary signature instead. |
description |
Optional | A description of the signed content. |
description-url |
Optional | A Uniform Resource Locator (URL) for the expanded description of the signed content. |
generate-digest-path |
Optional | Generates the digest to be signed and the unsigned PKCS7 files at this path. The output digest and PKCS7 files will be path\FileName.dig and path\FileName.p7u. To output an additional XML file, see `generate-digest-xml`. |
generate-digest-xml |
Optional | A boolean value (true/false) that indicates if an XML file is produced when the `generate-digest-path` input is used. The output file will be Path\FileName.dig.xml. |
ingest-digest-path |
Optional | Creates the signature by ingesting the signed digest to the unsigned PKCS7 file at this path. The input signed digest and unsigned PKCS7 files should be path\FileName.dig.signed and path\FileName.p7u. |
sign-digest |
Optional | A boolean value (true/false) that indicates if only the digest should be signed. The input file should be the digest generated by the `generate-digest-path` input. The output file will be File.signed. |
generate-page-hashes |
Optional | A boolean value (true/false) that indicates if page hashes should be generated for executable files. |
suppress-page-hashes |
Optional | A boolean value (true/false) that indicates if page hashes should be suppressed for executable files. The default is determined by the SIGNTOOL_PAGE_HASHES environment variable and by the wintrust.dll version. This input is ignored for non-PE files. |
generate-pkcs7 |
Optional | A boolean value (true/false) that indicates if a Public Key Cryptography Standards (PKCS) \#7 file is produced for each specified content file. PKCS \#7 files are named path\filename.p7. |
pkcs7-options |
Optional | Options for the signed PKCS \#7 content. Set value to "Embedded" to embed the signed content in the PKCS \#7 file, or to "DetachedSignedData" to produce the signed data portion of a detached PKCS \#7 file. If this input is not used, the signed content is embedded by default. |
pkcs7-oid |
Optional | The object identifier (OID) that identifies the signed PKCS \#7 content. |
enhanced-key-usage |
Optional | The enhanced key usage (EKU) that must be present in the signing certificate. The usage value can be specified by OID or string. The default usage is "Code Signing" (1.3.6.1.5.5.7.3.3). |
exclude-environment-credential |
Optional | Exclude the "EnvironmentCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-workload-identity-credential |
Optional | Exclude the "WorkloadIdentityCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-managed-identity-credential |
Optional | Exclude the "ManagedIdentity" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-shared-token-cache-credential |
Optional | Exclude the "SharedTokenCacheCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-visual-studio-credential |
Optional | Exclude the "VisualStudioCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-visual-studio-code-credential |
Optional | Exclude the "VisualStudioCodeCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-azure-cli-credential |
Optional | Exclude the "AzureCliCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-azure-powershell-credential |
Optional | Exclude the "AzurePowerShellCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-azure-developer-cli-credential |
Optional | Exclude the "AzureDeveloperCliCredential" type from being considered when authenticating with "DefaultAzureCredential". |
exclude-interactive-browser-credential |
Optional | Exclude the "InteractiveBrowserCredential" type from being considered when authenticating with "DefaultAzureCredential". Default: true |
timeout |
Optional | The number of seconds that the Trusted Signing service will wait for all files to be signed before it exits. The default value is 300 seconds. |
batch-size |
Optional | The summed length of file paths that can be signed with each signtool call. This parameter should only be relevant if you are signing a large number of files. Increasing the value may result in performance gains at the risk of potentially hitting your system's maximum command length limit. The minimum value is 0 and the maximum value is 30000. A value of 0 means that every file will be signed with an individual call to signtool. |
cache-dependencies |
Optional | A boolean value (true/false) that indicates if the dependencies for this action should be cached by GitHub or not. The default value is true. When using self-hosted runners, caches from workflow runs are stored on GitHub-owned cloud storage. A customer-owned storage solution is only available with GitHub Enterprise Server. When enabled, this option can reduce the duration of the action by at least 1 minute. Default: true |
trace |
Optional | A boolean value (true/false) that controls trace logging. The default value is false. Default: false |
clickonce-application-name |
Optional | The application name for any ClickOnce files being signed. |
clickonce-publisher-name |
Optional | The publisher name for any ClickOnce files being signed. |
correlation-id |
Optional | A unique identifier for the signing request. This value is used to track the signing request in the Trusted Signing service. |
name: 'Trusted Signing'
description: 'Sign your files with Trusted Signing.'
inputs:
azure-tenant-id:
description: The Azure Active Directory tenant (directory) ID.
required: false
azure-client-id:
description: The client (application) ID of an App Registration in the tenant.
required: false
azure-client-secret:
description: A client secret that was generated for the App Registration.
required: false
azure-client-certificate-path:
description: A path to certificate and private key pair in PEM or PFX format, which can authenticate
the App Registration.
required: false
azure-client-send-certificate-chain:
description: Specifies whether an authentication request will include an x5c header to support subject
name / issuer based authentication. When set to `true` or `1`, authentication requests
include the x5c header.
required: false
azure-username:
description: The username, also known as upn, of an Azure Active Directory user account.
required: false
azure-password:
description: The password of the Azure Active Directory user account. Note this does not support
accounts with MFA enabled.
required: false
endpoint:
description: The Trusted Signing Account endpoint. The URI value must align to the region in which
the Trusted Signing Account and Certificate Profile you are specifying were created.
required: true
code-signing-account-name:
description: The Code Signing Account name.
required: false
trusted-signing-account-name:
description: The Trusted Signing Account name.
required: false
certificate-profile-name:
description: The Certificate Profile name.
required: true
files:
description: A comma or newline separated list of absolute paths to the files being signed. Can be combined
with the files-folder and file-catalog inputs.
required: false
files-folder:
description: The folder containing files to be signed. Can be combined with the files and file-catalog inputs.
required: false
files-folder-filter:
description: A comma separated list of file extensions that determines which types of files will
be signed in the folder specified by the files-folder input. E.g., 'dll,exe,msix'.
Any file type not included in this list will not be signed. If this input is not used,
all files in the folder will be signed. Supports wildcards for matching multiple file
names with a pattern.
required: false
files-folder-recurse:
description: A boolean value (true/false) that indicates if the folder specified by the files-folder
input should be searched recursively. The default value is false.
required: false
files-folder-depth:
description: An integer value that indicates the depth of the recursive search toggled by the
files-folder-recurse input.
required: false
files-catalog:
description: A file containing a list of relative paths to the files being signed. The paths
should be relative to the location of the catalog file. Each file path should be on
a separate line. Can be combined with the files and files-folder inputs.
required: false
file-digest:
description: The name of the digest algorithm used for hashing the files being signed. The supported
values are SHA256, SHA384, and SHA512.
required: false
default: 'SHA256'
timestamp-rfc3161:
description: A URL to an RFC3161 compliant timestamping service.
required: false
default: 'http://timestamp.acs.microsoft.com'
timestamp-digest:
description: The name of the digest algorithm used for timestamping. The supported values are SHA256,
SHA384, and SHA512.
required: false
default: 'SHA256'
append-signature:
description: A boolean value (true/false) that indicates if the signature should be appended. If
no primary signature is present, this signature is made the primary signature instead.
required: false
description:
description: A description of the signed content.
required: false
description-url:
description: A Uniform Resource Locator (URL) for the expanded description of the signed content.
required: false
generate-digest-path:
description: Generates the digest to be signed and the unsigned PKCS7 files at this path. The output
digest and PKCS7 files will be path\FileName.dig and path\FileName.p7u. To output an additional
XML file, see `generate-digest-xml`.
required: false
generate-digest-xml:
description: A boolean value (true/false) that indicates if an XML file is produced when the
`generate-digest-path` input is used. The output file will be Path\FileName.dig.xml.
required: false
ingest-digest-path:
description: Creates the signature by ingesting the signed digest to the unsigned PKCS7 file at this
path. The input signed digest and unsigned PKCS7 files should be path\FileName.dig.signed
and path\FileName.p7u.
required: false
sign-digest:
description: A boolean value (true/false) that indicates if only the digest should be signed. The
input file should be the digest generated by the `generate-digest-path` input. The output
file will be File.signed.
required: false
generate-page-hashes:
description: A boolean value (true/false) that indicates if page hashes should be generated for
executable files.
required: false
suppress-page-hashes:
description: A boolean value (true/false) that indicates if page hashes should be suppressed for
executable files. The default is determined by the SIGNTOOL_PAGE_HASHES environment
variable and by the wintrust.dll version. This input is ignored for non-PE files.
required: false
generate-pkcs7:
description: A boolean value (true/false) that indicates if a Public Key Cryptography Standards
(PKCS) \#7 file is produced for each specified content file. PKCS \#7 files are named
path\filename.p7.
required: false
pkcs7-options:
description: Options for the signed PKCS \#7 content. Set value to "Embedded" to embed the signed
content in the PKCS \#7 file, or to "DetachedSignedData" to produce the signed data
portion of a detached PKCS \#7 file. If this input is not used, the signed content is
embedded by default.
required: false
pkcs7-oid:
description: The object identifier (OID) that identifies the signed PKCS \#7 content.
required: false
enhanced-key-usage:
description: The enhanced key usage (EKU) that must be present in the signing certificate. The usage
value can be specified by OID or string. The default usage is "Code Signing" (1.3.6.1.5.5.7.3.3).
required: false
exclude-environment-credential:
description: Exclude the "EnvironmentCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-workload-identity-credential:
description: Exclude the "WorkloadIdentityCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-managed-identity-credential:
description: Exclude the "ManagedIdentity" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-shared-token-cache-credential:
description: Exclude the "SharedTokenCacheCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-visual-studio-credential:
description: Exclude the "VisualStudioCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-visual-studio-code-credential:
description: Exclude the "VisualStudioCodeCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-azure-cli-credential:
description: Exclude the "AzureCliCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-azure-powershell-credential:
description: Exclude the "AzurePowerShellCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-azure-developer-cli-credential:
description: Exclude the "AzureDeveloperCliCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
exclude-interactive-browser-credential:
description: Exclude the "InteractiveBrowserCredential" type from being considered when authenticating with "DefaultAzureCredential".
required: false
default: 'true'
timeout:
description: The number of seconds that the Trusted Signing service will wait for all files
to be signed before it exits. The default value is 300 seconds.
required: false
batch-size:
description: The summed length of file paths that can be signed with each signtool call. This parameter
should only be relevant if you are signing a large number of files. Increasing the value
may result in performance gains at the risk of potentially hitting your system's maximum
command length limit. The minimum value is 0 and the maximum value is 30000. A value of
0 means that every file will be signed with an individual call to signtool.
required: false
cache-dependencies:
description: A boolean value (true/false) that indicates if the dependencies for this action should be cached
by GitHub or not. The default value is true. When using self-hosted runners, caches from workflow
runs are stored on GitHub-owned cloud storage. A customer-owned storage solution is only available
with GitHub Enterprise Server. When enabled, this option can reduce the duration of the action by
at least 1 minute.
required: false
default: 'true'
trace:
description: A boolean value (true/false) that controls trace logging. The default value is false.
required: false
default: 'false'
clickonce-application-name:
description: The application name for any ClickOnce files being signed.
required: false
clickonce-publisher-name:
description: The publisher name for any ClickOnce files being signed.
required: false
correlation-id:
description: A unique identifier for the signing request. This value is used to track the signing request
in the Trusted Signing service.
required: false
runs:
using: 'composite'
steps:
- name: Set variables
id: set-variables
shell: 'pwsh'
run: |
$defaultPath = $env:PSModulePath -split ';' | Select-Object -First 1
"PSMODULEPATH=$defaultPath" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
"TRUSTED_SIGNING_MODULE_VERSION=0.5.8" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
"BUILD_TOOLS_NUGET_VERSION=10.0.26100.4188" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
"TRUSTED_SIGNING_NUGET_VERSION=1.0.95" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
"DOTNET_SIGNCLI_NUGET_VERSION=0.9.1-beta.25330.2" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
- name: Cache TrustedSigning PowerShell module
id: cache-module
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
env:
cache-name: cache-module
with:
path: ${{ steps.set-variables.outputs.PSMODULEPATH }}\TrustedSigning\${{ steps.set-variables.outputs.TRUSTED_SIGNING_MODULE_VERSION }}
key: TrustedSigning-${{ steps.set-variables.outputs.TRUSTED_SIGNING_MODULE_VERSION }}
if: ${{ inputs.cache-dependencies == 'true' }}
- name: Cache Microsoft.Windows.SDK.BuildTools NuGet package
id: cache-buildtools
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
env:
cache-name: cache-buildtools
with:
path: ~\AppData\Local\TrustedSigning\Microsoft.Windows.SDK.BuildTools\Microsoft.Windows.SDK.BuildTools.${{ steps.set-variables.outputs.BUILD_TOOLS_NUGET_VERSION }}
key: Microsoft.Windows.SDK.BuildTools-${{ steps.set-variables.outputs.BUILD_TOOLS_NUGET_VERSION }}
if: ${{ inputs.cache-dependencies == 'true' }}
- name: Cache Microsoft.Trusted.Signing.Client NuGet package
id: cache-tsclient
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
env:
cache-name: cache-tsclient
with:
path: ~\AppData\Local\TrustedSigning\Microsoft.Trusted.Signing.Client\Microsoft.Trusted.Signing.Client.${{ steps.set-variables.outputs.TRUSTED_SIGNING_NUGET_VERSION }}
key: Microsoft.Trusted.Signing.Client-${{ steps.set-variables.outputs.TRUSTED_SIGNING_NUGET_VERSION }}
if: ${{ inputs.cache-dependencies == 'true' }}
- name: Cache SignCli NuGet package
id: cache-signcli
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
env:
cache-name: cache-signcli
with:
path: ~\AppData\Local\TrustedSigning\sign\sign.${{ steps.set-variables.outputs.DOTNET_SIGNCLI_NUGET_VERSION }}
key: SignCli-${{ steps.set-variables.outputs.DOTNET_SIGNCLI_NUGET_VERSION }}
if: ${{ inputs.cache-dependencies == 'true' }}
- name: Install Trusted Signing module
shell: 'pwsh'
run: |
Install-Module -Name TrustedSigning -RequiredVersion ${{ steps.set-variables.outputs.TRUSTED_SIGNING_MODULE_VERSION }} -Force -Repository PSGallery
if: ${{ inputs.cache-dependencies != 'true' || steps.cache-module.outputs.cache-hit != 'true' }}
- name: Invoke signing
env:
AZURE_TENANT_ID: ${{ inputs.azure-tenant-id }}
AZURE_CLIENT_ID: ${{ inputs.azure-client-id }}
AZURE_CLIENT_SECRET: ${{ inputs.azure-client-secret }}
AZURE_CLIENT_CERTIFICATE_PATH: ${{ inputs.azure-client-certificate-path }}
AZURE_CLIENT_SEND_CERTIFICATE_CHAIN: ${{ inputs.azure-client-send-certificate-chain }}
AZURE_USERNAME: ${{ inputs.azure-username }}
AZURE_PASSWORD: ${{ inputs.azure-password }}
INPUT_ENDPOINT: ${{ inputs.endpoint }}
INPUT_TRUSTED_SIGNING_ACCOUNT_NAME: ${{ inputs.trusted-signing-account-name }}
INPUT_CODE_SIGNING_ACCOUNT_NAME: ${{ inputs.code-signing-account-name }}
INPUT_CERTIFICATE_PROFILE_NAME: ${{ inputs.certificate-profile-name }}
INPUT_FILES: ${{ inputs.files }}
INPUT_FILES_FOLDER: ${{ inputs.files-folder }}
INPUT_FILES_FOLDER_FILTER: ${{ inputs.files-folder-filter }}
INPUT_FILES_FOLDER_RECURSE: ${{ inputs.files-folder-recurse }}
INPUT_FILES_FOLDER_DEPTH: ${{ inputs.files-folder-depth }}
INPUT_FILES_CATALOG: ${{ inputs.files-catalog }}
INPUT_FILE_DIGEST: ${{ inputs.file-digest }}
INPUT_TIMESTAMP_RFC3161: ${{ inputs.timestamp-rfc3161 }}
INPUT_TIMESTAMP_DIGEST: ${{ inputs.timestamp-digest }}
INPUT_APPEND_SIGNATURE: ${{ inputs.append-signature }}
INPUT_DESCRIPTION: ${{ inputs.description }}
INPUT_DESCRIPTION_URL: ${{ inputs.description-url }}
INPUT_GENERATE_DIGEST_PATH: ${{ inputs.generate-digest-path }}
INPUT_GENERATE_DIGEST_XML: ${{ inputs.generate-digest-xml }}
INPUT_INGEST_DIGEST_PATH: ${{ inputs.ingest-digest-path }}
INPUT_SIGN_DIGEST: ${{ inputs.sign-digest }}
INPUT_GENERATE_PAGE_HASHES: ${{ inputs.generate-page-hashes }}
INPUT_SUPPRESS_PAGE_HASHES: ${{ inputs.suppress-page-hashes }}
INPUT_GENERATE_PKCS7: ${{ inputs.generate-pkcs7 }}
INPUT_PKCS7_OPTIONS: ${{ inputs.pkcs7-options }}
INPUT_PKCS7_OID: ${{ inputs.pkcs7-oid }}
INPUT_ENHANCED_KEY_USAGE: ${{ inputs.enhanced-key-usage }}
INPUT_EXCLUDE_ENVIRONMENT_CREDENTIAL: ${{ inputs.exclude-environment-credential }}
INPUT_EXCLUDE_WORKLOAD_IDENTITY_CREDENTIAL: ${{ inputs.exclude-workload-identity-credential }}
INPUT_EXCLUDE_MANAGED_IDENTITY_CREDENTIAL: ${{ inputs.exclude-managed-identity-credential }}
INPUT_EXCLUDE_SHARED_TOKEN_CACHE_CREDENTIAL: ${{ inputs.exclude-shared-token-cache-credential }}
INPUT_EXCLUDE_VISUAL_STUDIO_CREDENTIAL: ${{ inputs.exclude-visual-studio-credential }}
INPUT_EXCLUDE_VISUAL_STUDIO_CODE_CREDENTIAL: ${{ inputs.exclude-visual-studio-code-credential }}
INPUT_EXCLUDE_AZURE_CLI_CREDENTIAL: ${{ inputs.exclude-azure-cli-credential }}
INPUT_EXCLUDE_AZURE_POWERSHELL_CREDENTIAL: ${{ inputs.exclude-azure-powershell-credential }}
INPUT_EXCLUDE_AZURE_DEVELOPER_CLI_CREDENTIAL: ${{ inputs.exclude-azure-developer-cli-credential }}
INPUT_EXCLUDE_INTERACTIVE_BROWSER_CREDENTIAL: ${{ inputs.exclude-interactive-browser-credential }}
INPUT_TIMEOUT: ${{ inputs.timeout }}
INPUT_BATCH_SIZE: ${{ inputs.batch-size }}
INPUT_TRACE: ${{ inputs.trace }}
INPUT_CLICKONCE_APPLICATION_NAME: ${{ inputs.clickonce-application-name }}
INPUT_CLICKONCE_PUBLISHER_NAME: ${{ inputs.clickonce-publisher-name }}
run: |
$params = @{}
$endpoint = $env:INPUT_ENDPOINT
if ([string]::IsNullOrWhiteSpace($endpoint)) {
throw "The 'endpoint' input is required."
}
$params["Endpoint"] = $endpoint
$trustedSigningAccountName = $env:INPUT_TRUSTED_SIGNING_ACCOUNT_NAME
if ([string]::IsNullOrWhiteSpace($trustedSigningAccountName)) {
$trustedSigningAccountName = $env:INPUT_CODE_SIGNING_ACCOUNT_NAME
if (-Not [string]::IsNullOrWhiteSpace($trustedSigningAccountName)) {
Write-Warning "The 'code-signing-account-name' input is deprecated. Please use 'trusted-signing-account-name' instead."
} else {
throw "The 'trusted-signing-account-name' input is required."
}
}
$params["CodeSigningAccountName"] = $trustedSigningAccountName
$certificateProfileName = $env:INPUT_CERTIFICATE_PROFILE_NAME
if (-Not [string]::IsNullOrWhiteSpace($certificateProfileName)) {
$params["CertificateProfileName"] = $certificateProfileName
}
$files = $env:INPUT_FILES
if (-Not [string]::IsNullOrWhiteSpace($files)) {
$params["Files"] = $files.replace("`n", ",")
}
$filesFolder = $env:INPUT_FILES_FOLDER
if (-Not [string]::IsNullOrWhiteSpace($filesFolder)) {
$params["FilesFolder"] = $filesFolder
}
$filesFolderFilter = $env:INPUT_FILES_FOLDER_FILTER
if (-Not [string]::IsNullOrWhiteSpace($filesFolderFilter)) {
$params["FilesFolderFilter"] = $filesFolderFilter
}
$filesFolderRecurse = $env:INPUT_FILES_FOLDER_RECURSE
if (-Not [string]::IsNullOrWhiteSpace($filesFolderRecurse)) {
$params["FilesFolderRecurse"] = [System.Convert]::ToBoolean($filesFolderRecurse)
}
$filesFolderDepth = $env:INPUT_FILES_FOLDER_DEPTH
if (-Not [string]::IsNullOrWhiteSpace($filesFolderDepth)) {
$params["FilesFolderDepth"] = [System.Convert]::ToInt32($filesFolderDepth)
}
$filesCatalog = $env:INPUT_FILES_CATALOG
if (-Not [string]::IsNullOrWhiteSpace($filesCatalog)) {
$params["FilesCatalog"] = $filesCatalog
}
$fileDigest = $env:INPUT_FILE_DIGEST
if (-Not [string]::IsNullOrWhiteSpace($fileDigest)) {
$params["FileDigest"] = $fileDigest
}
$timestampRfc3161 = $env:INPUT_TIMESTAMP_RFC3161
if (-Not [string]::IsNullOrWhiteSpace($timestampRfc3161)) {
$params["TimestampRfc3161"] = $timestampRfc3161
}
$timestampDigest = $env:INPUT_TIMESTAMP_DIGEST
if (-Not [string]::IsNullOrWhiteSpace($timestampDigest)) {
$params["TimestampDigest"] = $timestampDigest
}
$appendSignature = $env:INPUT_APPEND_SIGNATURE
if (-Not [string]::IsNullOrWhiteSpace($appendSignature)) {
$params["AppendSignature"] = [System.Convert]::ToBoolean($appendSignature)
}
$description = $env:INPUT_DESCRIPTION
if (-Not [string]::IsNullOrWhiteSpace($description)) {
$params["Description"] = $description
}
$descriptionUrl = $env:INPUT_DESCRIPTION_URL
if (-Not [string]::IsNullOrWhiteSpace($descriptionUrl)) {
$params["DescriptionUrl"] = $descriptionUrl
}
$generateDigestPath = $env:INPUT_GENERATE_DIGEST_PATH
if (-Not [string]::IsNullOrWhiteSpace($generateDigestPath)) {
$params["GenerateDigestPath"] = $generateDigestPath
}
$generateDigestXml = $env:INPUT_GENERATE_DIGEST_XML
if (-Not [string]::IsNullOrWhiteSpace($generateDigestXml)) {
$params["GenerateDigestXml"] = [System.Convert]::ToBoolean($generateDigestXml)
}
$ingestDigestPath = $env:INPUT_INGEST_DIGEST_PATH
if (-Not [string]::IsNullOrWhiteSpace($ingestDigestPath)) {
$params["IngestDigestPath"] = $ingestDigestPath
}
$signDigest = $env:INPUT_SIGN_DIGEST
if (-Not [string]::IsNullOrWhiteSpace($signDigest)) {
$params["SignDigest"] = [System.Convert]::ToBoolean($signDigest)
}
$generatePageHashes = $env:INPUT_GENERATE_PAGE_HASHES
if (-Not [string]::IsNullOrWhiteSpace($generatePageHashes)) {
$params["GeneratePageHashes"] = [System.Convert]::ToBoolean($generatePageHashes)
}
$suppressPageHashes = $env:INPUT_SUPPRESS_PAGE_HASHES
if (-Not [string]::IsNullOrWhiteSpace($suppressPageHashes)) {
$params["SuppressPageHashes"] = [System.Convert]::ToBoolean($suppressPageHashes)
}
$generatePkcs7 = $env:INPUT_GENERATE_PKCS7
if (-Not [string]::IsNullOrWhiteSpace($generatePkcs7)) {
$params["GeneratePkcs7"] = [System.Convert]::ToBoolean($generatePkcs7)
}
$pkcs7Options = $env:INPUT_PKCS7_OPTIONS
if (-Not [string]::IsNullOrWhiteSpace($pkcs7Options)) {
$params["Pkcs7Options"] = $pkcs7Options
}
$pkcs7Oid = $env:INPUT_PKCS7_OID
if (-Not [string]::IsNullOrWhiteSpace($pkcs7Oid)) {
$params["Pkcs7Oid"] = $pkcs7Oid
}
$enhancedKeyUsage = $env:INPUT_ENHANCED_KEY_USAGE
if (-Not [string]::IsNullOrWhiteSpace($enhancedKeyUsage)) {
$params["EnhancedKeyUsage"] = $enhancedKeyUsage
}
$excludeEnvironmentCredential = $env:INPUT_EXCLUDE_ENVIRONMENT_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeEnvironmentCredential)) {
$params["ExcludeEnvironmentCredential"] = [System.Convert]::ToBoolean($excludeEnvironmentCredential)
}
$excludeWorkloadIdentityCredential = $env:INPUT_EXCLUDE_WORKLOAD_IDENTITY_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeWorkloadIdentityCredential)) {
$params["ExcludeWorkloadIdentityCredential"] = [System.Convert]::ToBoolean($excludeWorkloadIdentityCredential)
}
$excludeManagedIdentityCredential = $env:INPUT_EXCLUDE_MANAGED_IDENTITY_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeManagedIdentityCredential)) {
$params["ExcludeManagedIdentityCredential"] = [System.Convert]::ToBoolean($excludeManagedIdentityCredential)
}
$excludeSharedTokenCacheCredential = $env:INPUT_EXCLUDE_SHARED_TOKEN_CACHE_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeSharedTokenCacheCredential)) {
$params["ExcludeSharedTokenCacheCredential"] = [System.Convert]::ToBoolean($excludeSharedTokenCacheCredential)
}
$excludeVisualStudioCredential = $env:INPUT_EXCLUDE_VISUAL_STUDIO_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeVisualStudioCredential)) {
$params["ExcludeVisualStudioCredential"] = [System.Convert]::ToBoolean($excludeVisualStudioCredential)
}
$excludeVisualStudioCodeCredential = $env:INPUT_EXCLUDE_VISUAL_STUDIO_CODE_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeVisualStudioCodeCredential)) {
$params["ExcludeVisualStudioCodeCredential"] = [System.Convert]::ToBoolean($excludeVisualStudioCodeCredential)
}
$excludeAzureCliCredential = $env:INPUT_EXCLUDE_AZURE_CLI_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeAzureCliCredential)) {
$params["ExcludeAzureCliCredential"] = [System.Convert]::ToBoolean($excludeAzureCliCredential)
}
$excludeAzurePowerShellCredential = $env:INPUT_EXCLUDE_AZURE_POWERSHELL_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeAzurePowerShellCredential)) {
$params["ExcludeAzurePowerShellCredential"] = [System.Convert]::ToBoolean($excludeAzurePowerShellCredential)
}
$excludeAzureDeveloperCliCredential = $env:INPUT_EXCLUDE_AZURE_DEVELOPER_CLI_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeAzureDeveloperCliCredential)) {
$params["ExcludeAzureDeveloperCliCredential"] = [System.Convert]::ToBoolean($excludeAzureDeveloperCliCredential)
}
$excludeInteractiveBrowserCredential = $env:INPUT_EXCLUDE_INTERACTIVE_BROWSER_CREDENTIAL
if (-Not [string]::IsNullOrWhiteSpace($excludeInteractiveBrowserCredential)) {
$params["ExcludeInteractiveBrowserCredential"] = [System.Convert]::ToBoolean($excludeInteractiveBrowserCredential)
}
$timeout = $env:INPUT_TIMEOUT
if (-Not [string]::IsNullOrWhiteSpace($timeout)) {
$params["Timeout"] = [System.Convert]::ToInt32($timeout)
}
$batchSize = $env:INPUT_BATCH_SIZE
if (-Not [string]::IsNullOrWhiteSpace($batchSize)) {
$params["BatchSize"] = [System.Convert]::ToInt32($batchSize)
}
$trace = $env:INPUT_TRACE
if (-Not [string]::IsNullOrWhiteSpace($trace)) {
if ([System.Convert]::ToBoolean($trace)) {
Set-PSDebug -Trace 2
}
}
$clickOnceApplicationName = $env:INPUT_CLICKONCE_APPLICATION_NAME
if (-Not [string]::IsNullOrWhiteSpace($clickOnceApplicationName)) {
$params["ClickOnceApplicationName"] = $clickOnceApplicationName
}
$clickOncePublisherName = $env:INPUT_CLICKONCE_PUBLISHER_NAME
if (-Not [string]::IsNullOrWhiteSpace($clickOncePublisherName)) {
$params["ClickOncePublisherName"] = $clickOncePublisherName
}
Invoke-TrustedSigning @params
shell: pwsh
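A minimal usage sketch; the `@v5` tag, secret names, endpoint, account and profile names, and the build step are assumptions, and a Windows runner is used because the composite steps run PowerShell and signtool-based tooling:

```yaml
on:
  push:
    tags: ['v*']
jobs:
  sign:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: dotnet publish -c Release -o publish   # placeholder build step
      - uses: azure/trusted-signing-action@v5   # tag is an assumption
        with:
          azure-tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          azure-client-id: ${{ secrets.AZURE_CLIENT_ID }}
          azure-client-secret: ${{ secrets.AZURE_CLIENT_SECRET }}
          endpoint: https://eus.codesigning.azure.net/    # placeholder; use your account's regional endpoint
          trusted-signing-account-name: my-signing-account      # placeholder
          certificate-profile-name: my-certificate-profile      # placeholder
          files-folder: publish
          files-folder-filter: exe,dll
          file-digest: SHA256
```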
Action ID: marketplace/actions/hello-world-javascript-action
Author: GitHub Actions
Publisher: actions
Repository: github.com/actions/hello-world-javascript-action
Greet someone and record the time
| Name | Required | Description |
|---|---|---|
who-to-greet |
Required | Who to greet Default: World |
| Name | Description |
|---|---|
time |
The time we greeted you |
name: Hello, World!
description: Greet someone and record the time
author: GitHub Actions
# Define your inputs here.
inputs:
who-to-greet:
description: Who to greet
required: true
default: World
# Define your outputs here.
outputs:
time:
description: The time we greeted you
runs:
using: node24
main: dist/index.js
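A minimal usage sketch (the `@main` ref is an assumption):

```yaml
on: [push]
jobs:
  hello:
    runs-on: ubuntu-latest
    steps:
      - name: Say hello
        id: hello
        uses: actions/hello-world-javascript-action@main   # ref is an assumption
        with:
          who-to-greet: Mona the Octocat
      - name: Show greeting time
        run: echo "Greeted at ${{ steps.hello.outputs.time }}"
```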
Action ID: marketplace/appleboy/azure-blob-storage-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/azure-blob-storage-action
Update ExpiresAt of a blob in Azure Blob storage
| Name | Required | Description |
|---|---|---|
account_name |
Required | Account name of Azure Blob storage |
account_key |
Required | Account key of Azure Blob storage |
container_name |
Required | Container name of Azure Blob storage |
blob_name |
Required | Blob name of Azure Blob storage |
duration |
Optional | ExpiresAt duration Default: 1h |
| Name | Description |
|---|---|
expire_at |
ExpiresAt of the blob |
blob_url |
URL of the blob |
expire_at_unix |
ExpiresAt of the blob in Unix timestamp |
name: "Azure Blob storage Tool"
description: "Update ExpiresAt of a blob in Azure Blob storage"
author: "Bo-Yi Wu"
inputs:
account_name:
description: "Account name of Azure Blob storage"
required: true
account_key:
description: "Account key of Azure Blob storage"
required: true
container_name:
description: "Container name of Azure Blob storage"
required: true
blob_name:
description: "Blob name of Azure Blob storage"
required: true
duration:
description: "ExpiresAt duration"
default: "1h"
outputs:
expire_at:
description: "ExpiresAt of the blob"
blob_url:
description: "URL of the blob"
expire_at_unix:
description: "ExpiresAt of the blob in Unix timestamp"
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "cloud"
color: "blue"
Action ID: marketplace/appleboy/github-action-sample
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/github-action-sample
GitHub Action Sample
| Name | Required | Description |
|---|---|---|
github_token |
Optional | github token |
name: 'GitHub Action Sample'
description: 'GitHub Action Sample'
author: 'Bo-Yi Wu'
inputs:
github_token:
description: 'github token'
default: ''
runs:
using: 'docker'
image: 'Dockerfile'
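A minimal usage sketch (the `@master` ref is an assumption):

```yaml
on: [push]
jobs:
  sample:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/github-action-sample@master   # ref is an assumption
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
```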
Action ID: marketplace/azure/setup-helm
Author: Unknown
Publisher: azure
Repository: github.com/azure/setup-helm
Install a specific version of helm binary. Acceptable values are latest or any semantic version string like 1.15.0
| Name | Required | Description |
|---|---|---|
version |
Required | Version of helm Default: latest |
token |
Optional | GitHub token. Used to be required to fetch the latest version Default: ${{ github.token }} |
downloadBaseURL |
Optional | Set the download base URL Default: https://get.helm.sh |
| Name | Description |
|---|---|
helm-path |
Path to the cached helm binary |
name: 'Helm tool installer'
description: 'Install a specific version of helm binary. Acceptable values are latest or any semantic version string like 1.15.0'
inputs:
version:
description: 'Version of helm'
required: true
default: 'latest'
token:
description: GitHub token. Used to be required to fetch the latest version
required: false
deprecationMessage: 'GitHub token is no longer required'
default: '${{ github.token }}'
downloadBaseURL:
description: 'Set the download base URL'
required: false
default: 'https://get.helm.sh'
outputs:
helm-path:
description: 'Path to the cached helm binary'
branding:
color: 'blue'
runs:
using: 'node20'
main: 'lib/index.js'
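A minimal usage sketch (the `@v4` tag and the pinned helm version are assumptions):

```yaml
on: [push]
jobs:
  helm:
    runs-on: ubuntu-latest
    steps:
      - name: Install helm
        id: install
        uses: azure/setup-helm@v4   # tag is an assumption
        with:
          version: v3.14.0   # any semantic version string, or 'latest'
      - name: Verify installation
        run: |
          echo "helm installed at ${{ steps.install.outputs.helm-path }}"
          helm version
```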
Action ID: marketplace/Burnett01/actions-drawio
Author: Burnett01
Publisher: Burnett01
Repository: github.com/Burnett01/actions-drawio
Convert draw.io documents to jpeg|png|svg|pdf.
| Name | Required | Description |
|---|---|---|
quality |
Optional | Image quality for png|jpeg Default: 100 |
scale |
Optional | Scales the output file size for pixel-based output formats Default: 1.0 |
src |
Required | Input file |
dest |
Required | Output file |
name: 'actions-drawio'
author: 'Burnett01'
description: 'Convert draw.io documents to jpeg|png|svg|pdf.'
inputs:
quality:
description: 'Image quality for png|jpeg'
required: false
default: '100'
scale:
description: 'Scales the output file size for pixel-based output formats'
required: false
default: '1.0'
src:
description: 'Input file'
required: true
dest:
description: 'Output file'
required: true
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'image'
color: 'gray-dark'
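A minimal usage sketch (the `@v1` ref and the file paths are assumptions):

```yaml
on: [push]
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Convert diagram
        uses: Burnett01/actions-drawio@v1   # ref is an assumption
        with:
          src: docs/architecture.drawio   # placeholder input file
          dest: docs/architecture.png     # placeholder output file
          quality: '90'
          scale: '2.0'
      - uses: actions/upload-artifact@v4
        with:
          name: diagrams
          path: docs/architecture.png
```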
Action ID: marketplace/mxschmitt/setup-miniconda
Author: Gonzalo Peña-Castellanos (@goanpeca)
Publisher: mxschmitt
Repository: github.com/mxschmitt/setup-miniconda
Set up Conda package and environment manager with Miniconda.
| Name | Required | Description |
|---|---|---|
installer-url |
Optional | If provided, this installer will be used instead of a miniconda installer, and cached based on its full URL. Visit https://github.com/conda/constructor for more information on creating installers. |
miniconda-version |
Optional | If provided, this version of Miniconda3 will be downloaded and installed. Visit https://repo.continuum.io/miniconda/ for more information on available versions. |
miniforge-variant |
Optional | If provided, this variant of Miniforge will be downloaded and installed. If `miniforge-version` is not provided, the `latest` version will be used. Currently-known values: Miniforge3 (default), Miniforge-pypy3, Mambaforge, Mambaforge-pypy3. Visit https://github.com/conda-forge/miniforge/releases/ for more information on available variants. |
miniforge-version |
Optional | If provided, this version of the given Miniforge variant will be downloaded and installed. If `miniforge-variant` is not provided, `Miniforge3` will be used. Visit https://github.com/conda-forge/miniforge/releases/ for more information on available versions |
conda-version |
Optional | Specific version of Conda to install after miniconda is located or installed. See https://anaconda.org/anaconda/conda for available "conda" versions. |
conda-build-version |
Optional | Version of conda build to install. If not provided conda-build is not installed. See https://anaconda.org/anaconda/conda-build for available "conda-build" versions. |
environment-file |
Optional | Environment.yml to create an environment. See https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file for more information. |
activate-environment |
Optional | Environment name (or path) to activate on all shells. Default is `test` which will be created in `$CONDA/envs/test`. If an empty string is used, no environment is activated by default (For `base` activation see the `auto-activate-base` option). If the environment does not exist, it will be created and activated. If `environment-file` is used and you want that to be the environment used, you need to explicitly provide the name of that environment on `activate-environment`. If using sh/bash/cmd.exe shells please read the IMPORTANT! section on the README.md! to properly activate conda environments on these shells. Default: test |
python-version |
Optional | Exact version of a Python version to use on "activate-environment". If provided, this will be installed before the "environment-file". See https://anaconda.org/anaconda/python for available "python" versions. |
add-anaconda-token |
Optional | Conda configuration. When the channel alias is Anaconda.org or an Anaconda Server GUI, you can set the system configuration so that users automatically see private packages. Anaconda.org was formerly known as binstar.org. This uses the Anaconda command-line client, which you can install with conda install anaconda-client, to automatically add the token to the channel URLs. The default is "true". See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#add-anaconda-org-token-to-automatically-see-private-packages-add-anaconda-token for more information. |
add-pip-as-python-dependency |
Optional | Conda configuration. Add pip, wheel, and setuptools as dependencies of Python. This ensures that pip, wheel, and setuptools are always installed any time Python is installed. The default is "true". See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#add-pip-as-python-dependency-add-pip-as-python-dependency for more information. |
allow-softlinks |
Optional | Conda configuration. When allow_softlinks is "true", conda uses hard-links when possible and soft-links---symlinks---when hard-links are not possible, such as when installing on a different file system than the one that the package cache is on. When allow_softlinks is "false", conda still uses hard-links when possible, but when it is not possible, conda copies files. Individual packages can override this option, specifying that certain files should never be soft-linked. The default is "true". See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#disallow-soft-linking-allow-softlinks for more information. |
auto-activate-base |
Optional | Conda configuration. If you’d prefer that conda’s base environment not be activated on startup, set this to "false". Default is "true". This setting always overrides if set to "true" or "false". If you want to use the "condarc-file" setting, pass an empty string. See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/ for more information. Default: true |
auto-update-conda |
Optional | Conda configuration. When "true", conda updates itself any time a user updates or installs a package in the base environment. When "false", conda updates itself only if the user manually issues a conda update command. The default is "true". This setting always overrides if set to "true" or "false". If you want to use the "condarc-file" setting pass and empty string. See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/ for more information. Default: false |
condarc-file |
Optional | Conda configuration. Path to a conda configuration file to use for the runner. This file will be copied to "~/.condarc". See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/ for more information. |
channel-alias |
Optional | Conda configuration. Whenever you use the -c or --channel flag to give conda a channel name that is not a URL, conda prepends the channel_alias to the name that it was given. The default channel_alias is https://conda.anaconda.org. See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#set-a-channel-alias-channel-alias for more information. |
channel-priority |
Optional | Conda configuration. Accepts values of "strict", "flexible", and "disabled". The default value is "flexible". With strict channel priority, packages in lower priority channels are not considered if a package with the same name appears in a higher priority channel. With flexible channel priority, the solver may reach into lower priority channels to fulfill dependencies, rather than raising an unsatisfiable error. With channel priority disabled, package version takes precedence, and the configured priority of channels is used only to break ties. In previous versions of conda, this parameter was configured as either "true" or "false". "true" is now an alias to "flexible". See https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-channels.html#strict-channel-priority for more information. |
channels |
Optional | Conda configuration. Comma separated list of channels to use in order of priority. See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/ for more information. |
show-channel-urls |
Optional | Conda configuration. Show channel URLs when displaying what is going to be downloaded and in conda list. The default is "false". See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#show-channel-urls-show-channel-urls for more information. |
use-only-tar-bz2 |
Optional | Conda configuration. Conda 4.7 introduced a new .conda package file format. .conda is a more compact and faster alternative to .tar.bz2 packages. It is thus the preferred file format to use where available. Nevertheless, it is possible to force conda to only download .tar.bz2 packages by setting the use_only_tar_bz2 boolean to "true". The default is "false". See https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#force-conda-to-download-only-tar-bz2-packages-use-only-tar-bz2 for more information. |
remove-profiles |
Optional | Advanced. Prior to running "conda init", all shell profiles will be removed from the runner. Default is "true". Default: true |
mamba-version |
Optional | Experimental. Use mamba (https://github.com/QuantStack/mamba) as a faster drop-in replacement for conda installs. Disabled by default. To enable, use "*" or a "x.y" version string. |
use-mamba |
Optional | Experimental. Use mamba as soon as it is available (either as provided by `mamba-in-installer` or installed via `mamba-version`) |
architecture |
Optional | Architecture of Miniconda that should be installed. Available options on GitHub-hosted runners are "x86" and "x64". Default is "x64". Default: x64 |
name: "Setup Miniconda"
author: Gonzalo Peña-Castellanos (@goanpeca)
description: "Set up Conda package and environment manager with Miniconda."
inputs:
installer-url:
description:
"If provided, this installer will be used instead of a miniconda
installer, and cached based on its full URL. Visit
https://github.com/conda/constructor for more information on creating
installers"
required: false
default: ""
miniconda-version:
description:
"If provided, this version of Miniconda3 will be downloaded and installed.
Visit https://repo.continuum.io/miniconda/ for more information on
available versions."
required: false
default: ""
miniforge-variant:
description:
"If provided, this variant of Miniforge will be downloaded and installed.
If `miniforge-version` is not provided, the `latest` version will be used.
Currently-known values: - Miniforge3 (default) - Miniforge-pypy3 -
Mambaforge - Mambaforge-pypy3. Visit
https://github.com/conda-forge/miniforge/releases/ for more information on
available variants"
required: false
default: ""
miniforge-version:
description:
"If provided, this version of the given Miniforge variant will be
downloaded and installed. If `miniforge-variant` is not provided,
`Miniforge3` will be used. Visit
https://github.com/conda-forge/miniforge/releases/ for more information on
available versions"
required: false
default: ""
conda-version:
description:
'Specific version of Conda to install after miniconda is located or
installed. See https://anaconda.org/anaconda/conda for available "conda"
versions.'
required: false
default: ""
conda-build-version:
description:
'Version of conda build to install. If not provided conda-build is not
installed. See https://anaconda.org/anaconda/conda-build for available
"conda-build" versions.'
required: false
default: ""
environment-file:
description:
"Environment.yml to create an environment. See
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file
for more information."
required: false
default: ""
activate-environment:
description:
"Environment name (or path) to activate on all shells. Default is `test`
which will be created in `$CONDA/envs/test`. If an empty string is used,
no environment is activated by default (For `base` activation see the
`auto-activate-base` option). If the environment does not exist, it will
be created and activated. If `environment-file` is used and you want that
to be the environment used, you need to explicitly provide the name of
that environment on `activate-environment`. If using sh/bash/cmd.exe
shells please read the IMPORTANT! section on the README.md! to properly
activate conda environments on these shells."
required: false
default: "test"
python-version:
description:
'Exact version of a Python version to use on "activate-environment". If
provided, this will be installed before the "environment-file". See
https://anaconda.org/anaconda/python for available "python" versions.'
required: false
default: ""
add-anaconda-token:
description:
'Conda configuration. When the channel alias is Anaconda.org or an
Anaconda Server GUI, you can set the system configuration so that users
automatically see private packages. Anaconda.org was formerly known as
binstar.org. This uses the Anaconda command-line client, which you can
install with conda install anaconda-client, to automatically add the token
to the channel URLs. The default is "true". See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#add-anaconda-org-token-to-automatically-see-private-packages-add-anaconda-token
for more information.'
required: false
default: ""
add-pip-as-python-dependency:
description:
'Conda configuration. Add pip, wheel, and setuptools as dependencies of
Python. This ensures that pip, wheel, and setuptools are always installed
any time Python is installed. The default is "true". See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#add-pip-as-python-dependency-add-pip-as-python-dependency
for more information.'
required: false
default: ""
allow-softlinks:
description:
'Conda configuration. When allow_softlinks is "true", conda uses
hard-links when possible and soft-links---symlinks---when hard-links are
not possible, such as when installing on a different file system than the
one that the package cache is on. When allow_softlinks is "false", conda
still uses hard-links when possible, but when it is not possible, conda
copies files. Individual packages can override this option, specifying
that certain files should never be soft-linked. The default is "true". See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#disallow-soft-linking-allow-softlinks
for more information.'
required: false
default: ""
auto-activate-base:
description:
'Conda configuration. If you’d prefer that conda’s base environment not be
activated on startup, set this to "false". Default is "true". This setting
always takes precedence if set to "true" or "false". If you want to use the
"condarc-file" setting instead, pass an empty string. See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/
for more information.'
required: false
default: "true"
auto-update-conda:
description:
'Conda configuration. When "true", conda updates itself any time a user
updates or installs a package in the base environment. When "false", conda
updates itself only if the user manually issues a conda update command.
The default is "true". This setting always overrides if set to "true" or
"false". If you want to use the "condarc-file" setting pass and empty
string. See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/
for more information.'
required: false
default: "false"
condarc-file:
description:
'Conda configuration. Path to a conda configuration file to use for the
runner. This file will be copied to "~/.condarc". See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/
for more information.'
required: false
default: ""
channel-alias:
description:
"Conda configuration. Whenever you use the -c or --channel flag to give
conda a channel name that is not a URL, conda prepends the channel_alias
to the name that it was given. The default channel_alias is
https://conda.anaconda.org. See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#set-a-channel-alias-channel-alias
for more information."
required: false
default: ""
channel-priority:
description:
'Conda configuration. Accepts values of "strict", "flexible", and
"disabled". The default value is "flexible". With strict channel priority,
packages in lower priority channels are not considered if a package with
the same name appears in a higher priority channel. With flexible channel
priority, the solver may reach into lower priority channels to fulfill
dependencies, rather than raising an unsatisfiable error. With channel
priority disabled, package version takes precedence, and the configured
priority of channels is used only to break ties. In previous versions of
conda, this parameter was configured as either "true" or "false". "true"
is now an alias to "flexible". See
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-channels.html#strict-channel-priority
for more information.'
required: false
default: ""
channels:
description:
"Conda configuration. Comma separated list of channels to use in order of
priority. See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/
for more information."
required: false
default: ""
show-channel-urls:
description:
'Conda configuration. Show channel URLs when displaying what is going to
be downloaded and in conda list. The default is "false". See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#show-channel-urls-show-channel-urls
for more information.'
required: false
default: ""
use-only-tar-bz2:
description:
'Conda configuration. Conda 4.7 introduced a new .conda package file
format. .conda is a more compact and faster alternative to .tar.bz2
packages. It is thus the preferred file format to use where available.
Nevertheless, it is possible to force conda to only download .tar.bz2
packages by setting the use_only_tar_bz2 boolean to "true". The default is
"false". See
https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/use-condarc.html#force-conda-to-download-only-tar-bz2-packages-use-only-tar-bz2
for more information.'
required: false
default: ""
remove-profiles:
description:
'Advanced. Prior to running "conda init", all shell profiles will be
removed from the runner. Default is "true".'
required: false
default: "true"
mamba-version:
description:
'Experimental. Use mamba (https://github.com/QuantStack/mamba) as a faster
drop-in replacement for conda installs. Disabled by default. To enable,
use "*" or a "x.y" version string.'
required: false
default: ""
use-mamba:
description:
"Experimental. Use mamba as soon as available (either as provided by
`mamba-in-installer` or installation by `mamba-version`)"
required: false
default: ""
architecture:
description:
'Architecture of Miniconda that should be installed. Available options on
GitHub-hosted runners are "x86" and "x64". Default is "x64".'
required: false
default: "x64"
runs:
using: "node12"
main: "dist/setup/index.js"
post: "dist/delete/index.js"
post-if: "success()"
branding:
icon: "code"
color: "green"
Action ID: marketplace/dflook/checkout
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/checkout
Checkout a GitHub repository at a particular ref
| Name | Required | Description |
|---|---|---|
repository |
Optional | Repository name with owner. For example, actions/checkout Default: ${{ github.repository }} |
ref |
Optional | The branch, tag or SHA to checkout. When checking out the repository that triggered a workflow, this defaults to the reference or SHA for that event. Otherwise, uses the default branch. |
token |
Optional | Token to use when checking out the repository Default: ${{ github.token }} |
path |
Optional | Relative path under $GITHUB_WORKSPACE to place the repository Default: . |
| Name | Description |
|---|---|
ref |
The branch, tag or SHA that was checked out |
commit |
The commit SHA that was checked out |
name: 'Checkout'
description: 'Checkout a GitHub repository at a particular ref'
author: 'Daniel Flook'
inputs:
repository:
description: 'Repository name with owner. For example, actions/checkout'
default: ${{ github.repository }}
ref:
description: >
The branch, tag or SHA to checkout. When checking out the repository that
triggered a workflow, this defaults to the reference or SHA for that
event. Otherwise, uses the default branch.
token:
description: 'Token to use when checking out the repository'
default: ${{ github.token }}
path:
description: 'Relative path under $GITHUB_WORKSPACE to place the repository'
default: '.'
outputs:
ref:
description: 'The branch, tag or SHA that was checked out'
commit:
description: 'The commit SHA that was checked out'
runs:
using: docker
image: Dockerfile
entrypoint: /entrypoint.sh
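A hedged usage sketch for this checkout action; the `@v1` ref and the repository, ref, and path values are placeholders to adapt.

```yaml
steps:
  - name: Checkout a repository
    id: checkout
    uses: dflook/checkout@v1          # placeholder ref; pin to a released tag or commit SHA
    with:
      repository: my-org/my-repo      # example value; defaults to ${{ github.repository }}
      ref: main
      path: vendor/my-repo
  - run: echo "Checked out commit ${{ steps.checkout.outputs.commit }}"
```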
Action ID: marketplace/actions/download-artifact
Author: GitHub
Publisher: actions
Repository: github.com/actions/download-artifact
Download a build artifact that was previously uploaded in the workflow by the upload-artifact action
| Name | Required | Description |
|---|---|---|
name |
Optional | Name of the artifact to download. If unspecified, all artifacts for the run are downloaded. |
artifact-ids |
Optional | IDs of the artifacts to download, comma-separated. Either inputs `artifact-ids` or `name` can be used, but not both. |
path |
Optional | Destination path. Supports basic tilde expansion. Defaults to $GITHUB_WORKSPACE |
pattern |
Optional | A glob pattern matching the artifacts that should be downloaded. Ignored if name is specified. |
merge-multiple |
Optional | When multiple artifacts are matched, this changes the behavior of the destination directories. If true, the downloaded artifacts will be in the same directory specified by path. If false, the downloaded artifacts will be extracted into individual named directories within the specified path. Default: false |
github-token |
Optional | The GitHub token used to authenticate with the GitHub API. This is required when downloading artifacts from a different repository or from a different workflow run. If this is not specified, the action will attempt to download artifacts from the current repository and the current workflow run. |
repository |
Optional | The repository owner and the repository name joined together by "/". If github-token is specified, this is the repository that artifacts will be downloaded from. Default: ${{ github.repository }} |
run-id |
Optional | The id of the workflow run where the desired download artifact was uploaded from. If github-token is specified, this is the run that artifacts will be downloaded from. Default: ${{ github.run_id }} |
| Name | Description |
|---|---|
download-path |
Path of artifact download |
name: 'Download a Build Artifact'
description: 'Download a build artifact that was previously uploaded in the workflow by the upload-artifact action'
author: 'GitHub'
inputs:
name:
description: 'Name of the artifact to download. If unspecified, all artifacts for the run are downloaded.'
required: false
artifact-ids:
description: 'IDs of the artifacts to download, comma-separated. Either inputs `artifact-ids` or `name` can be used, but not both.'
required: false
path:
description: 'Destination path. Supports basic tilde expansion. Defaults to $GITHUB_WORKSPACE'
required: false
pattern:
description: 'A glob pattern matching the artifacts that should be downloaded. Ignored if name is specified.'
required: false
merge-multiple:
description: 'When multiple artifacts are matched, this changes the behavior of the destination directories.
If true, the downloaded artifacts will be in the same directory specified by path.
If false, the downloaded artifacts will be extracted into individual named directories within the specified path.'
required: false
default: 'false'
github-token:
description: 'The GitHub token used to authenticate with the GitHub API.
This is required when downloading artifacts from a different repository or from a different workflow run.
If this is not specified, the action will attempt to download artifacts from the current repository and the current workflow run.'
required: false
repository:
description: 'The repository owner and the repository name joined together by "/".
If github-token is specified, this is the repository that artifacts will be downloaded from.'
required: false
default: ${{ github.repository }}
run-id:
description: 'The id of the workflow run where the desired download artifact was uploaded from.
If github-token is specified, this is the run that artifacts will be downloaded from.'
required: false
default: ${{ github.run_id }}
outputs:
download-path:
description: 'Path of artifact download'
runs:
using: 'node24'
main: 'dist/index.js'
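A short sketch wiring the `name` and `path` inputs to the `download-path` output; the ref and artifact name are placeholders.

```yaml
steps:
  - name: Download build output
    id: download
    uses: actions/download-artifact@v4   # placeholder ref; pin to a current release
    with:
      name: build-output                 # example artifact name from an earlier upload-artifact step
      path: dist
  - run: ls -R "${{ steps.download.outputs.download-path }}"
```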
Action ID: marketplace/AndreasAugustin/azure-login
Author: Unknown
Publisher: AndreasAugustin
Repository: github.com/AndreasAugustin/azure-login
Authenticate to Azure and run your Azure CLI or Azure PowerShell based actions or scripts.
| Name | Required | Description |
|---|---|---|
creds |
Optional | Paste output of `az ad sp create-for-rbac` as value of secret variable: AZURE_CREDENTIALS |
client-id |
Optional | ClientId of the Azure Service principal created. |
tenant-id |
Optional | TenantId of the Azure Service principal created. |
subscription-id |
Optional | Azure subscriptionId |
enable-AzPSSession |
Optional | Set this value to true to enable Azure PowerShell Login in addition to Azure CLI login |
environment |
Optional | Name of the environment. Supported values are azurecloud, azurestack, azureusgovernment, azurechinacloud, azuregermancloud. Default being azurecloud Default: azurecloud |
allow-no-subscriptions |
Optional | Set this value to true to enable support for accessing tenants without subscriptions |
audience |
Optional | Provide audience field for access-token. Default value is api://AzureADTokenExchange Default: api://AzureADTokenExchange |
auth-type |
Optional | The type of authentication. Supported values are SERVICE_PRINCIPAL, IDENTITY. Default value is SERVICE_PRINCIPAL Default: SERVICE_PRINCIPAL |
# Login to Azure subscription
name: 'Azure Login'
description: 'Authenticate to Azure and run your Azure CLI or Azure PowerShell based actions or scripts.'
inputs:
creds:
description: 'Paste output of `az ad sp create-for-rbac` as value of secret variable: AZURE_CREDENTIALS'
required: false
client-id:
description: 'ClientId of the Azure Service principal created.'
required: false
tenant-id:
description: 'TenantId of the Azure Service principal created.'
required: false
subscription-id:
description: 'Azure subscriptionId'
required: false
enable-AzPSSession:
description: 'Set this value to true to enable Azure PowerShell Login in addition to Azure CLI login'
required: false
default: false
environment:
description: 'Name of the environment. Supported values are azurecloud, azurestack, azureusgovernment, azurechinacloud, azuregermancloud. Default being azurecloud'
required: false
default: azurecloud
allow-no-subscriptions:
description: 'Set this value to true to enable support for accessing tenants without subscriptions'
required: false
default: false
audience:
description: 'Provide audience field for access-token. Default value is api://AzureADTokenExchange'
required: false
default: 'api://AzureADTokenExchange'
auth-type:
description: 'The type of authentication. Supported values are SERVICE_PRINCIPAL, IDENTITY. Default value is SERVICE_PRINCIPAL'
required: false
default: 'SERVICE_PRINCIPAL'
branding:
icon: 'login.svg'
color: 'blue'
runs:
using: 'node16'
pre: 'lib/cleanup.js'
main: 'lib/main.js'
post: 'lib/cleanup.js'
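A hedged OIDC-style sketch for this login action (the fork mirrors the `azure/login` inputs). The `@v2` ref is a placeholder, the secret names are examples, and federated credentials plus the `id-token: write` permission are assumed prerequisites.

```yaml
permissions:
  id-token: write    # assumed requirement for the OIDC token exchange implied by the audience input
  contents: read
steps:
  - uses: AndreasAugustin/azure-login@v2   # placeholder ref for this fork; upstream is azure/login
    with:
      client-id: ${{ secrets.AZURE_CLIENT_ID }}
      tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
  - run: az account show
```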
Action ID: marketplace/julia-actions/julia-uploadcoveralls
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-uploadcoveralls
Upload Julia test coverage results to Coveralls
name: 'Upload Julia coveralls results'
description: 'Upload Julia test coverage results to Coveralls'
author: 'David Anthoff'
branding:
icon: 'upload'
color: 'gray-dark'
runs:
using: 'composite'
steps:
- run: julia --color=yes -e 'using Pkg; Pkg.add("Coverage"); using Coverage; Coveralls.submit(process_folder())'
shell: bash
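A sketch of a typical test-then-upload job; the refs are placeholders, and the `COVERALLS_TOKEN` secret/env name is an assumption about what Coverage.jl's `Coveralls.submit` expects.

```yaml
steps:
  - uses: actions/checkout@v4                      # placeholder ref
  - uses: julia-actions/setup-julia@v2             # placeholder ref
  - uses: julia-actions/julia-runtest@v1           # placeholder ref; produces the coverage files
  - uses: julia-actions/julia-uploadcoveralls@v1   # placeholder ref
    env:
      COVERALLS_TOKEN: ${{ secrets.COVERALLS_TOKEN }}   # assumed env var consumed by Coveralls.submit
```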
Action ID: marketplace/dflook/terraform-refresh
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-refresh
Refresh Terraform state
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the Terraform root module to refresh state for Default: . |
workspace |
Optional | Terraform workspace to run the refresh state in Default: default |
variables |
Optional | Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of Terraform backend config values, one per line. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
target |
Optional | List of resources to target, one per line. The refresh will be limited to these resources and their dependencies. |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure`, this output may be set. The value may be one of: - `refresh-failed` - The Terraform refresh operation failed. - `state-locked` - The Terraform state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
lock-info |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: ```json { "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" } ``` |
run_id |
If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id. |
name: terraform-refresh
description: Refresh Terraform state
author: Daniel Flook
inputs:
path:
description: Path to the Terraform root module to refresh state for
required: false
default: "."
workspace:
description: Terraform workspace to run the refresh state in
required: false
default: "default"
variables:
description: |
Variables to set for the terraform plan. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
target:
description: |
List of resources to target, one per line.
The refresh will be limited to these resources and their dependencies.
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `refresh-failed` - The Terraform refresh operation failed.
- `state-locked` - The Terraform state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
run_id:
description: If the root module uses the `remote` or `cloud` backend in remote execution mode, this output will be set to the remote run id.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/refresh.sh
branding:
icon: globe
color: purple
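A hedged sketch wiring the documented inputs and the `failure-reason` output; the ref, module path, and variable values are placeholders.

```yaml
steps:
  - uses: actions/checkout@v4                # placeholder ref
  - name: Refresh Terraform state
    id: refresh
    uses: dflook/terraform-refresh@v2        # placeholder ref
    with:
      path: infra/prod                       # example root module path
      workspace: default
      variables: |
        environment = "prod"
  - if: ${{ failure() && steps.refresh.outputs.failure-reason == 'state-locked' }}
    run: echo 'State locked: ${{ steps.refresh.outputs.lock-info }}'
```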
Action ID: marketplace/posener/learn
Author: Unknown
Publisher: posener
Repository: github.com/posener/learn
| Name | Required | Description |
|---|---|---|
addr |
Optional | Address to listen to Default: :8080 |
config |
Optional | Config file to load. Default: config.json |
# File generated by github.com/posener/goaction. DO NOT EDIT.
name: main
inputs:
addr:
default: :8080
description: "Address to listen to"
required: false
config:
default: config.json
description: "Config file to load."
required: false
runs:
using: docker
image: Dockerfile
args:
- "-addr=${{ inputs.addr }}"
- "-config=${{ inputs.config }}"
Action ID: marketplace/azure/container-apps-deploy-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/container-apps-deploy-action
'GitHub Action for building and deploying Azure Container Apps'
| Name | Required | Description |
|---|---|---|
appSourcePath |
Optional | Absolute path on the GitHub runner of the source application code to be built. |
acrName |
Optional | The name of the Azure Container Registry that the runnable application image will be pushed to. |
acrUsername |
Optional | 'The username used to authenticate push requests to the provided Azure Container Registry. If not provided, an access token will be generated via "az acr login" and provided to "docker login" to authenticate the requests.' |
acrPassword |
Optional | 'The password used to authenticate push requests to the provided Azure Container Registry. If not provided, an access token will be generated via "az acr login" and provided to "docker login" to authenticate the requests.' |
registryUrl |
Optional | The base URL of the Container Registry that the runnable application image will be pushed to. |
registryUsername |
Optional | The username used to authenticate push requests to the provided Container Registry using the "docker login" action. |
registryPassword |
Optional | The password used to authenticate push requests to the provided Container Registry using the "docker login" action. |
azureCredentials |
Optional | 'Azure credentials used by the `azure/login` action to authenticate Azure CLI requests if the user has not previously authenticated in the workflow calling this action.' |
imageToBuild |
Optional | 'The custom name of the image that is to be built, pushed to the Container Registry and deployed to the Container App by this action. Note: this image name should include the registry server; e.g., <registryUrl>/<repo>:<tag>. If this argument is not provided, a default image name will be constructed in the form of <acr-name>.azurecr.io/github-action/container-app:<github-run-id>.<github-run-attempt>.' |
imageToDeploy |
Optional | 'The custom name of an image that has already been pushed to the Container Registry and will be deployed to the Container App by this action. Note: this image name should include the registry server; e.g., <registryUrl>/<repo>:<tag>. If this argument is not provided, the value provided (or determined) for the "imageToBuild" argument will be used.' |
dockerfilePath |
Optional | 'Relative path to the Dockerfile in the provided application source that should be used to build the image that is then pushed to the Container Registry and deployed to the Container App. If not provided, this action will check if there is a file named "Dockerfile" in the provided application source and use that to build the image. Otherwise, the Oryx++ Builder will be used to create the image.' |
containerAppName |
Optional | 'The name of the Azure Container App that will be created or updated. If not provided, this value will be gh-action-app-<github-run-id>-<github-run-attempt>.' Default: gh-action-app-${{ github.run_id }}-${{ github.run_attempt }} |
resourceGroup |
Optional | 'The existing resource group that the Azure Container App will be created in. If not provided, this value will be <container-app-name>-rg and its existence will first be checked before attempting to create it.' |
containerAppEnvironment |
Optional | 'The name of the Azure Container App environment to use with the application. If not provided, an existing environment in the resource group of the Container App will be used, otherwise, an environment will be created in the format <container-app-name>-env' |
runtimeStack |
Optional | 'The platform version stack that the application runs in when deployed to the Azure Container App. This should be provided in the format <platform>:<version>. If not provided, this value is determined by Oryx based on the contents of the provided application. Please view the following document for more information on the supported runtime stacks for Oryx: https://github.com/microsoft/Oryx/blob/main/doc/supportedRuntimeVersions.md' |
builderStack |
Optional | 'The stack (OS) that should be used to build the provided application source and produce the runnable application image. You can provide a specific image tag for the stack, such as "debian-bullseye-20231107.2", or you can provide a supported stack name, such as "debian-bookworm" or "debian-bullseye", and the latest supported image tag for that stack will be used. If no stack is provided, this action will attempt to build the provided application source with each supported stack until there's a successful build.' |
buildArguments |
Optional | 'A list of build arguments provided as KEY=VALUE pairings and are space-separated. If a Dockerfile has been provided or is discovered in the application source, each build argument will be passed to the "docker build" command via the --build-arg flag. If the Oryx++ builder is used to create a runnable application image, each build argument will be passed to the "pack build" command via the --env flag.' |
targetPort |
Optional | 'The designated port for the application to run on. If no value is provided and the builder is used to build the runnable application image, the target port will be set to 80 for Python applications and 8080 for all other platform applications. If no value is provided when creating a Container App, the target port will be set to 80 by default. Note: when using this action to update a Container App, the target port may be updated if not provided based on changes to the ingress property.' |
location |
Optional | 'The location that the Container App (and other created resources) will be deployed to. To view locations suitable for creating the Container App in, please run the following: az provider show -n Microsoft.App --query "resourceTypes[?resourceType=='containerApps'].locations"' |
environmentVariables |
Optional | 'A list of environment variable(s) for the container. Space-separated values in 'key=value' format. Empty string to clear existing values. Prefix value with 'secretref:' to reference a secret.' |
ingress |
Optional | 'Possible options: external, internal, disabled. If set to "external" (default value if not provided when creating a Container App), the Container App will be visible from the internet or a VNET, depending on the app environment endpoint configured. If set to "internal", the Container App will be visible from within the app environment only. If set to "disabled", ingress will be disabled for this Container App and will not have an HTTP or TCP endpoint.' |
yamlConfigPath |
Optional | 'Full path (on the executing GitHub runner) to the YAML file detailing the configuration of the Container App. The 'resourceGroup' property in the YAML file will not be used; the value for this either comes from the 'resourceGroup' argument provided to the action, or the default resource group name generated by the action. Image and application source arguments (e.g., 'appSourcePath', 'imageToDeploy') will still be used to first build and/or push an image that is used by the Container App; in this case, the provided YAML configuration file will need to reference the image specified by 'imageToDeploy' (or 'imageToBuild', depending on your scenario). When creating a new Container App, all properties listed in the file will be set when the Container App is created. When updating an existing Container App, only the properties listed in the file will be updated on the Container App. For more information on the structure of this YAML configuration file, please visit https://aka.ms/azure-container-apps-yaml' |
disableTelemetry |
Optional | 'If set to "true", no telemetry will be collected by this GitHub Action. If set to "false", or if this argument is
not provided, telemetry will be sent to Microsoft about the Container App build and deploy scenario targeted by
this GitHub Action.'
Default: false |
targetLabel |
Optional | 'Specifies the target label for the Azure Container App revision. This will replace any prior revision with the given label. This is required when using ActiveRevisionMode: Labels.' |
name: 'Azure Container Apps Build and Deploy'
description: |
'GitHub Action for building and deploying Azure Container Apps'
branding:
icon: "login.svg"
color: "blue"
inputs:
appSourcePath:
description: 'Absolute path on the GitHub runner of the source application code to be built.'
required: false
acrName:
description: 'The name of the Azure Container Registry that the runnable application image will be pushed to.'
required: false
acrUsername:
description: |
'The username used to authenticate push requests to the provided Azure Container Registry. If not provided, an
access token will be generated via "az acr login" and provided to "docker login" to authenticate the requests.'
required: false
acrPassword:
description: |
'The password used to authenticate push requests to the provided Azure Container Registry. If not provided, an
access token will be generated via "az acr login" and provided to "docker login" to authenticate the requests.'
required: false
registryUrl:
description: 'The base URL of the Container Registry that the runnable application image will be pushed to.'
required: false
registryUsername:
description: 'The username used to authenticate push requests to the provided Container Registry using the "docker login" action.'
required: false
registryPassword:
description: 'The password used to authenticate push requests to the provided Container Registry using the "docker login" action.'
required: false
azureCredentials:
description: |
'Azure credentials used by the `azure/login` action to authenticate Azure CLI requests if the user has not
previously authenticated in the workflow calling this action.'
required: false
imageToBuild:
description: |
'The custom name of the image that is to be built, pushed to the Container Registry and deployed to the Container App by this action.
Note: this image name should include the registry server; e.g., <registryUrl>/<repo>:<tag>. If this argument is
not provided, a default image name will be constructed in the form of
<acr-name>.azurecr.io/github-action/container-app:<github-run-id>.<github-run-attempt>.'
required: false
imageToDeploy:
description: |
'The custom name of an image that has already been pushed to the Container Registry and will be deployed to the Container App by this
action. Note: this image name should include the registry server; e.g., <registryUrl>/<repo>:<tag>. If this
argument is not provided, the value provided (or determined) for the "imageToBuild" argument will be used.'
required: false
dockerfilePath:
description: |
'Relative path to the Dockerfile in the provided application source that should be used to build the image that is
then pushed to the Container Registry and deployed to the Container App. If not provided, this action will check if there is a file
named "Dockerfile" in the provided application source and use that to build the image. Otherwise, the Oryx++
Builder will be used to create the image.'
required: false
containerAppName:
description: |
'The name of the Azure Container App that will be created or updated. If not provided, this value will be
gh-action-app-<github-run-id>-<github-run-attempt>.'
required: false
default: 'gh-action-app-${{ github.run_id }}-${{ github.run_attempt }}'
resourceGroup:
description: |
'The existing resource group that the Azure Container App will be created in. If not provided, this value will be
<container-app-name>-rg and its existence will first be checked before attempting to create it.'
required: false
containerAppEnvironment:
description: |
'The name of the Azure Container App environment to use with the application. If not provided, an existing
environment in the resource group of the Container App will be used, otherwise, an environment will be created in
the format <container-app-name>-env'
required: false
runtimeStack:
description: |
'The platform version stack that the application runs in when deployed to the Azure Container App. This should
be provided in the format <platform>:<version>. If not provided, this value is determined by Oryx based on the
contents of the provided application. Please view the following document for more information on the supported
runtime stacks for Oryx:
https://github.com/microsoft/Oryx/blob/main/doc/supportedRuntimeVersions.md'
required: false
builderStack:
description: |
'The stack (OS) that should be used to build the provided application source and produce the runnable application
image. You can provide a specific image tag for the stack, such as "debian-bullseye-20231107.2", or you can
provide a supported stack name, such as "debian-bookworm" or "debian-bullseye", and the latest supported image tag
for that stack will be used. If no stack is provided, this action will attempt to build the provided application
source with each supported stack until there's a successful build.'
required: false
buildArguments:
description: |
'A list of build arguments provided as KEY=VALUE pairings and are space-separated. If a Dockerfile has been
provided or is discovered in the application source, each build argument will be passed to the "docker build"
command via the --build-arg flag. If the Oryx++ builder is used to create a runnable application image, each
build argument will be passed to the "pack build" command via the --env flag.'
required: false
targetPort:
description: |
'The designated port for the application to run on. If no value is provided and the builder is used to build the
runnable application image, the target port will be set to 80 for Python applications and 8080 for all other
platform applications. If no value is provided when creating a Container App, the target port will be set to 80 by
default. Note: when using this action to update a Container App, the target port may be updated if not provided
based on changes to the ingress property.'
required: false
location:
description: |
'The location that the Container App (and other created resources) will be deployed to. To view locations suitable
for creating the Container App in, please run the following: az provider show -n Microsoft.App --query "resourceTypes[?resourceType=='containerApps'].locations"'
required: false
environmentVariables:
description: |
'A list of environment variable(s) for the container. Space-separated values in 'key=value' format. Empty string
to clear existing values. Prefix value with 'secretref:' to reference a secret.'
required: false
ingress:
description: |
'Possible options: external, internal, disabled. If set to "external" (default value if not provided when creating
a Container App), the Container App will be visible from the internet or a VNET, depending on the app environment
endpoint configured. If set to "internal", the Container App will be visible from within the app environment only.
If set to "disabled", ingress will be disabled for this Container App and will not have an HTTP or TCP endpoint.'
required: false
yamlConfigPath:
description: |
'Full path (on the executing GitHub runner) to the YAML file detailing the configuration of the Container App.
The 'resourceGroup' property in the YAML file will not be used; the value for this either comes from the
'resourceGroup' argument provided to the action, or the default resource group name generated by the action.
Image and application source arguments (e.g., 'appSourcePath', 'imageToDeploy') will still be used to first build
and/or push an image that is used by the Container App; in this case, the provided YAML configuration file will
need to reference the image specified by 'imageToDeploy' (or 'imageToBuild', depending on your scenario). When
creating a new Container App, all properties listed in the file will be set when the Container App is created.
When updating an existing Container App, only the properties listed in the file will be updated on the Container
App. For more information on the structure of this YAML configuration file, please visit
https://aka.ms/azure-container-apps-yaml'
disableTelemetry:
description: |
'If set to "true", no telemetry will be collected by this GitHub Action. If set to "false", or if this argument is
not provided, telemetry will be sent to Microsoft about the Container App build and deploy scenario targeted by
this GitHub Action.'
required: false
default: 'false'
targetLabel:
description: |
'Specifies the target label for the Azure Container App revision.
This will replace any prior revision with the given label. This is required when using ActiveRevisionMode: Labels.'
required: false
runs:
using: 'node16'
main: 'dist/index.js'
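A hedged end-to-end sketch: authenticate the Azure CLI first (as the `azureCredentials` input notes when the workflow is not already logged in), then build and deploy from source. The refs and resource names are placeholders.

```yaml
steps:
  - uses: actions/checkout@v4                       # placeholder ref
  - uses: azure/login@v2                            # placeholder ref; authenticate the Azure CLI first
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  - uses: azure/container-apps-deploy-action@v2     # placeholder ref
    with:
      appSourcePath: ${{ github.workspace }}        # example source path
      acrName: myregistry                           # example Azure Container Registry name
      containerAppName: my-container-app
      resourceGroup: my-resource-group
      targetPort: 8080
```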
Action ID: marketplace/julia-actions/composite-runtest
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/composite-runtest
Run the tests in a Julia package
| Name | Required | Description |
|---|---|---|
inline |
Optional | Value passed to the --inline flag. Options: yes | no. Default value: yes. Default: yes |
name: 'Run Julia package tests'
description: 'Run the tests in a Julia package'
inputs:
inline:
description: 'Value passed to the --inline flag. Options: yes | no. Default value: yes.'
default: 'yes'
runs:
using: "composite"
steps:
- run: julia --color=yes --check-bounds=yes --inline=${{ inputs.inline }} --project -e 'using Pkg; Pkg.test(coverage=true)'
shell: bash
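A minimal sketch; the refs are placeholders and Julia is assumed to be installed by a separate setup step.

```yaml
steps:
  - uses: actions/checkout@v4                   # placeholder ref
  - uses: julia-actions/setup-julia@v2          # placeholder ref; provides the julia binary
  - uses: julia-actions/composite-runtest@v1    # placeholder ref; repository path as listed above
    with:
      inline: 'no'                              # example; the default is 'yes'
```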
Action ID: marketplace/skx/github-action-tester
Author: Steve Kemp
Publisher: skx
Repository: github.com/skx/github-action-tester
Run tests when pull-requests are opened, or commits pushed.
| Name | Required | Description |
|---|---|---|
script |
Optional | The path to the test-script to run, within the repository. Default: .github/run-tests.sh |
name: 'github-action-tester'
description: 'Run tests when pull-requests are opened, or commits pushed.'
author: 'Steve Kemp'
branding:
icon: eye
color: black
inputs:
script:
description: 'The path to the test-script to run, within the repository.'
default: '.github/run-tests.sh'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.script }}
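A sketch matching the "pull-requests opened, or commits pushed" description; the ref is a placeholder and the script path is the documented default.

```yaml
on: [push, pull_request]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4               # placeholder ref
      - uses: skx/github-action-tester@master   # placeholder ref; pin to a tag or SHA
        with:
          script: .github/run-tests.sh          # default value shown above
```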
Action ID: marketplace/creyD/crush_action
Author: Conrad Großer <grosserconrad@gmail.com>
Publisher: creyD
Repository: github.com/creyD/crush_action
Losslessly reduces file size for PNG and JPG
| Name | Required | Description |
|---|---|---|
commit_message |
Optional | Commit message Default: Optimised your images! |
commit_options |
Optional | Commit options |
file_pattern |
Optional | File pattern used for `git add` Default: * |
branch |
Required | Target branch for the changes |
folder |
Optional | Starting folder for recursive search Default: . |
name: Crush Action
description: Losslessly reduces file size for PNG and JPG
author: Conrad Großer <grosserconrad@gmail.com>
inputs:
commit_message:
description: Commit message
required: false
default: 'Optimised your images!'
commit_options:
description: Commit options
required: false
file_pattern:
description: File pattern used for `git add`
required: false
default: '*'
branch:
description: Target branch for the changes
required: true
folder:
description: Starting folder for recursive search
required: false
default: '.'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'aperture'
color: 'green'
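A hedged sketch; `branch` is the only required input, and the ref, folder, and commit message are example values.

```yaml
steps:
  - uses: actions/checkout@v4            # placeholder ref
  - uses: creyD/crush_action@master      # placeholder ref; pin to a tag or SHA
    with:
      branch: ${{ github.head_ref }}     # example: push optimised images back to the PR branch
      folder: assets/img                 # example starting folder for the recursive search
      commit_message: "chore: optimise images"
```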
Builds toolkit for VDC.
name: 'Build and deploy the VDC toolkit'
description: 'Builds toolkit for VDC.'
author: 'Jack '
runs:
using: 'docker'
image: 'dockerfile'
branding:
color: red
icon: flag
Action ID: marketplace/appleboy/trufflehog
Author: Truffle Security Co. <support@trufflesec.com>
Publisher: appleboy
Repository: github.com/appleboy/trufflehog
Scan Github Actions with TruffleHog.
| Name | Required | Description |
|---|---|---|
path |
Optional | Repository path Default: ./ |
base |
Optional | Start scanning from here (usually main branch). |
head |
Optional | Scan commits until here (usually dev branch). |
extra_args |
Optional | Extra args to be passed to the trufflehog cli. |
version |
Optional | Scan with this trufflehog cli version. Default: latest |
name: "TruffleHog OSS"
description: "Scan Github Actions with TruffleHog."
author: Truffle Security Co. <support@trufflesec.com>
inputs:
path:
description: Repository path
required: false
default: "./"
base:
description: Start scanning from here (usually main branch).
required: false
default: ""
head:
description: Scan commits until here (usually dev branch).
required: false
extra_args:
default: ""
description: Extra args to be passed to the trufflehog cli.
required: false
version:
default: "latest"
description: Scan with this trufflehog cli version.
required: false
branding:
icon: "shield"
color: "green"
runs:
using: "composite"
steps:
- shell: bash
working-directory: ${{ inputs.path }}
env:
BASE: ${{ inputs.base }}
HEAD: ${{ inputs.head }}
ARGS: ${{ inputs.extra_args }}
COMMIT_IDS: ${{ toJson(github.event.commits.*.id) }}
VERSION: ${{ inputs.version }}
run: |
##########################################
## ADVANCED USAGE ##
## Scan by BASE & HEAD user inputs ##
## If BASE == HEAD, exit with error ##
##########################################
# Check if jq is installed, if not, install it
if ! command -v jq &> /dev/null
then
echo "jq could not be found, installing..."
apt-get -y update && apt-get install -y jq
fi
git status >/dev/null # make sure we are in a git repository
if [ -n "$BASE" ] || [ -n "$HEAD" ]; then
if [ -n "$BASE" ]; then
base_commit=$(git rev-parse "$BASE" 2>/dev/null) || true
else
base_commit=""
fi
if [ -n "$HEAD" ]; then
head_commit=$(git rev-parse "$HEAD" 2>/dev/null) || true
else
head_commit=""
fi
if [ "$base_commit" == "$head_commit" ] ; then
echo "::error::BASE and HEAD commits are the same. TruffleHog won't scan anything. Please see documentation (https://github.com/trufflesecurity/trufflehog#octocat-trufflehog-github-action)."
exit 1
fi
##########################################
## Scan commits based on event type ##
##########################################
else
if [ "${{ github.event_name }}" == "push" ]; then
COMMIT_LENGTH=$(printenv COMMIT_IDS | jq length)
if [ $COMMIT_LENGTH == "0" ]; then
echo "No commits to scan"
exit 0
fi
HEAD=${{ github.event.after }}
if [ ${{ github.event.before }} == "0000000000000000000000000000000000000000" ]; then
BASE=""
else
BASE=${{ github.event.before }}
fi
elif [ "${{ github.event_name }}" == "workflow_dispatch" ] || [ "${{ github.event_name }}" == "schedule" ]; then
BASE=""
HEAD=""
elif [ "${{ github.event_name }}" == "pull_request" ]; then
BASE=${{github.event.pull_request.base.sha}}
HEAD=${{github.event.pull_request.head.sha}}
fi
fi
##########################################
## Run TruffleHog ##
##########################################
docker run --rm -v .:/tmp -w /tmp \
ghcr.io/trufflesecurity/trufflehog:${VERSION} \
git file:///tmp/ \
--since-commit \
${BASE:-''} \
--branch \
${HEAD:-''} \
--fail \
--no-update \
--github-actions \
${ARGS:-''}
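A sketch of a scan job; full history is fetched so the BASE/HEAD commit resolution in the script above can work, and the ref and `extra_args` value are placeholders to confirm against the TruffleHog CLI version you pin.

```yaml
on: [push, pull_request]
jobs:
  secret-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # placeholder ref
        with:
          fetch-depth: 0                   # full history so base/head commits can be resolved
      - uses: appleboy/trufflehog@v1       # placeholder ref
        with:
          path: ./
          extra_args: --only-verified      # example CLI flag; verify against the pinned version
```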
Action ID: marketplace/actions/upload-pages-artifact
Author: GitHub
Publisher: actions
Repository: github.com/actions/upload-pages-artifact
A composite action that prepares your static assets to be deployed to GitHub Pages
| Name | Required | Description |
|---|---|---|
name |
Optional | Artifact name Default: github-pages |
path |
Required | Path of the directory containing the static assets. Default: _site/ |
retention-days |
Optional | Duration after which artifact will expire in days. Default: 1 |
| Name | Description |
|---|---|
artifact_id |
The ID of the artifact that was uploaded. |
name: "Upload GitHub Pages artifact"
description: "A composite action that prepares your static assets to be deployed to GitHub Pages"
author: "GitHub"
inputs:
name:
description: 'Artifact name'
required: false
default: 'github-pages'
path:
description: "Path of the directory containing the static assets."
required: true
default: "_site/"
retention-days:
description: "Duration after which artifact will expire in days."
required: false
default: "1"
outputs:
artifact_id:
description: "The ID of the artifact that was uploaded."
value: ${{ steps.upload-artifact.outputs.artifact-id }}
runs:
using: composite
steps:
- name: Archive artifact
shell: sh
if: runner.os == 'Linux'
run: |
echo ::group::Archive artifact
tar \
--dereference --hard-dereference \
--directory "$INPUT_PATH" \
-cvf "$RUNNER_TEMP/artifact.tar" \
--exclude=.git \
--exclude=.github \
--exclude=".[^/]*" \
.
echo ::endgroup::
env:
INPUT_PATH: ${{ inputs.path }}
# Switch to gtar (GNU tar instead of bsdtar which is the default in the MacOS runners so we can use --hard-dereference)
- name: Archive artifact
shell: sh
if: runner.os == 'macOS'
run: |
echo ::group::Archive artifact
gtar \
--dereference --hard-dereference \
--directory "$INPUT_PATH" \
-cvf "$RUNNER_TEMP/artifact.tar" \
--exclude=.git \
--exclude=.github \
--exclude=".[^/]*" \
.
echo ::endgroup::
env:
INPUT_PATH: ${{ inputs.path }}
# Massage the paths for Windows only
- name: Archive artifact
shell: bash
if: runner.os == 'Windows'
run: |
echo ::group::Archive artifact
tar \
--dereference --hard-dereference \
--directory "$INPUT_PATH" \
-cvf "$RUNNER_TEMP\artifact.tar" \
--exclude=.git \
--exclude=.github \
--exclude=".[^/]*" \
--force-local \
"."
echo ::endgroup::
env:
INPUT_PATH: ${{ inputs.path }}
- name: Upload artifact
id: upload-artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: ${{ inputs.name }}
path: ${{ runner.temp }}/artifact.tar
retention-days: ${{ inputs.retention-days }}
if-no-files-found: error
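A sketch pairing the upload with the companion `actions/deploy-pages` action; the refs, the build command, and the permissions block follow the usual Pages deployment pattern and are placeholders to adapt.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                # placeholder ref
      - run: ./build-site.sh                     # example build step that writes _site/
      - uses: actions/upload-pages-artifact@v3   # placeholder ref
        with:
          path: _site/
  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4            # placeholder ref; consumes the uploaded artifact
```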
Action ID: marketplace/dineshsonachalam/action-my-markdown-link-checker
Author: ruzickap
Publisher: dineshsonachalam
Repository: github.com/dineshsonachalam/action-my-markdown-link-checker
Check markdown files for broken links
| Name | Required | Description |
|---|---|---|
config_file |
Optional | markdown-link-check config file |
debug |
Optional | Debug mode |
exclude |
Optional | Exclude files or directories |
fd_cmd_params |
Optional | Command line parameters for fd command. "exclude" and "search_paths" parameters are ignored if this is set |
quiet |
Optional | Quiet mode for markdown-link-check |
search_paths |
Optional | Set paths which should be recursively scanned for markdown files (*.md) and linted. By default all "*.md" files are checked |
verbose |
Optional | Verbose mode for markdown-link-check |
name: 'My Markdown Link Checker'
description: 'Check markdown files for broken links'
author: 'ruzickap'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'list'
color: 'blue'
inputs:
config_file:
description: 'markdown-link-check config file'
debug:
description: 'Debug mode'
exclude:
description: 'Exclude files or directories'
fd_cmd_params:
description: 'Command line parameters for fd command. "exclude" and "search_paths" parameters are ignored if this is set'
quiet:
description: 'Quiet mode for markdown-link-check'
search_paths:
description: 'Set paths which should be recursively scanned for markdown files (*.md) and linted. By default all "*.md" files are checked'
verbose:
description: 'Verbose mode for markdown-link-check'
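A minimal sketch; the ref and the config file path are placeholders.

```yaml
steps:
  - uses: actions/checkout@v4                                    # placeholder ref
  - uses: dineshsonachalam/action-my-markdown-link-checker@v1    # placeholder ref
    with:
      config_file: .github/mlc_config.json    # example markdown-link-check config path
```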
Action ID: marketplace/julia-actions/RegisterAction
Author: Chris de Graaf
Publisher: julia-actions
Repository: github.com/julia-actions/RegisterAction
Register your Julia package with Registrator
| Name | Required | Description |
|---|---|---|
token |
Required | GitHub API token |
registrator |
Optional | Registrator bot username Default: JuliaRegistrator |
name: Register Julia Package
author: Chris de Graaf
description: Register your Julia package with Registrator
inputs:
token:
description: GitHub API token
required: true
registrator:
description: Registrator bot username
default: JuliaRegistrator
runs:
using: node16
main: register.js
branding:
icon: package
color: red
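A hedged sketch; triggering on issue comments is an assumption about how registration requests are usually issued to this action (confirm against its README), and the ref is a placeholder.

```yaml
on:
  issue_comment:
    types: [created]        # assumed trigger; verify against the action's README
jobs:
  register:
    runs-on: ubuntu-latest
    steps:
      - uses: julia-actions/RegisterAction@v1   # placeholder ref
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
```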
Action ID: marketplace/mheap/require-checklist-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/require-checklist-action
Ensure that any checklists in an issue/pull request are completed
| Name | Required | Description |
|---|---|---|
token |
Optional | The GitHub API token to use Default: ${{ github.token }} |
requireChecklist |
Optional | Require a checklist to exist Default: false |
skipComments |
Optional | Do not look for checklists in comments Default: false |
skipDescriptionRegex |
Optional | A regex pattern of descriptions that will be skipped if matched Default: undefined |
name: Require Checklist
description: Ensure that any checklists in an issue/pull request are completed
runs:
using: docker
image: Dockerfile
branding:
icon: check-square
color: gray-dark
inputs:
token:
description: The GitHub API token to use
default: ${{ github.token }}
required: false
requireChecklist:
description: Require a checklist to exist
required: false
default: "false"
skipComments:
description: Do not look for checklists in comments
required: false
default: "false"
skipDescriptionRegex:
description: A regex pattern of descriptions that will be skipped if matched
required: false
default: undefined
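A sketch; the triggers shown are typical choices for re-checking a pull request body as it changes, and the ref is a placeholder.

```yaml
on:
  pull_request:
    types: [opened, edited, synchronize]
jobs:
  checklist:
    runs-on: ubuntu-latest
    steps:
      - uses: mheap/require-checklist-action@v2   # placeholder ref
        with:
          requireChecklist: "true"
          skipComments: "false"
```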
Action ID: marketplace/github/command
Author: Grant Birkinbine
Publisher: github
Repository: github.com/github/command
IssueOps commands in GitHub Actions
| Name | Required | Description |
|---|---|---|
github_token |
Required | The GitHub token used to create an authenticated client - Provided for you by default! Default: ${{ github.token }} |
status |
Required | The status of the GitHub Actions - For use in the post run workflow - Provided for you by default! Default: ${{ job.status }} |
command |
Required | The string to look for in comments as an IssueOps trigger/command. Example: ".lint" |
reaction |
Required | If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket" or "eyes" Default: eyes |
success_reaction |
Required | The reaction to add to the comment that triggered the Action if its execution was successful Default: +1 |
failure_reaction |
Required | The reaction to add to the comment that triggered the Action if its execution failed Default: -1 |
allowed_contexts |
Required | A comma separated list of comment contexts that are allowed to trigger this IssueOps command. Pull requests and issues are the only currently supported contexts Default: pull_request |
permissions |
Required | The allowed GitHub permissions an actor can have to invoke IssueOps commands - Example: "write,admin" Default: write,admin |
allow_drafts |
Required | Whether or not to allow this IssueOps command to be run on draft pull requests Default: false |
allow_forks |
Required | Whether or not to allow this IssueOps command to be run on forked pull requests Default: false |
skip_ci |
Required | Whether or not to require passing CI checks before this IssueOps command can be run Default: false |
skip_reviews |
Required | Whether or not to require reviews before this IssueOps command can be run Default: false |
param_separator |
Required | The separator to use for parsing parameters in comments in IssueOps commands. Parameters are saved as outputs and can be used in subsequent steps Default: | |
allowlist |
Optional | A comma separated list of GitHub usernames or teams that should be allowed to use the IssueOps commands configured in this Action. If unset, then all users meeting the "permissions" requirement will be able to run commands. Example: "monalisa,octocat,my-org/my-team" Default: false |
allowlist_pat |
Optional | A GitHub personal access token with "read:org" scopes. This is only needed if you are using the "allowlist" option with a GitHub org team. For example: "my-org/my-team" Default: false |
skip_completing |
Required | If set to "true", skip the process of completing the Action. This is useful if you want to customize the way this Action completes - For example, custom reactions, comments, etc Default: false |
fork_review_bypass |
Required | If set to "true", allow forks to bypass the review requirement if the operation is being made on a pull request from a fork. This option is potentially dangerous if you are checking out code in your workflow as a result of invoking this Action. If the code you are checking out has not been reviewed, then you might open yourself up to a TOCTOU vulnerability. You should always ensure that the code you are checking out has been reviewed, and that you checkout an exact commit sha rather than a ref. Default: false |
| Name | Description |
|---|---|
| triggered | The string "true" if the trigger was found, otherwise the string "false" - Just because the workflow was triggered does not mean it should continue. This is a step 1/2 check |
| continue | The string "true" if the workflow should continue, otherwise empty - Use this to conditionally control if your workflow should proceed or not. This is a step 2/2 check |
| comment_body | The comment body |
| actor | The GitHub handle of the actor that invoked the IssueOps command |
| params | The raw parameters that were passed into the IssueOps command |
| comment_id | The comment id which triggered this action |
| issue_number | The issue number which this Action was triggered on |
| initial_reaction_id | The reaction id for the initial reaction on the trigger comment |
| fork | The string "true" if the pull request is a fork, otherwise "false" |
| fork_ref | The true ref of the fork |
| fork_label | The API label field returned for the fork |
| fork_checkout | The console command presented in the GitHub UI to checkout a given fork locally |
| fork_full_name | The full name of the fork in "org/repo" format |
| sha | The commit sha if being used in the context of a pull request |
| ref | The ref if being used in the context of a pull request |
| base_ref | The base ref that the pull request is merging into (if available and run in the context of a pull request) |
name: "command-action"
description: "IssueOps commands in GitHub Actions"
author: "Grant Birkinbine"
branding:
icon: 'play'
color: 'gray-dark'
inputs:
github_token:
description: The GitHub token used to create an authenticated client - Provided for you by default!
default: ${{ github.token }}
required: true
status:
description: The status of the GitHub Actions - For use in the post run workflow - Provided for you by default!
default: ${{ job.status }}
required: true
command:
description: 'The string to look for in comments as an IssueOps trigger/command. Example: ".lint"'
required: true
reaction:
description: 'If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket" or "eyes"'
required: true
default: "eyes"
success_reaction:
description: 'The reaction to add to the comment that triggered the Action if its execution was successful'
required: true
default: "+1"
failure_reaction:
description: 'The reaction to add to the comment that triggered the Action if its execution failed'
required: true
default: "-1"
allowed_contexts:
description: 'A comma separated list of comment contexts that are allowed to trigger this IssueOps command. Pull requests and issues are the only currently supported contexts'
required: true
default: "pull_request"
permissions:
description: 'The allowed GitHub permissions an actor can have to invoke IssueOps commands - Example: "write,admin"'
required: true
default: "write,admin"
allow_drafts:
description: 'Whether or not to allow this IssueOps command to be run on draft pull requests'
required: true
default: "false"
allow_forks:
description: 'Whether or not to allow this IssueOps command to be run on forked pull requests'
required: true
default: "false"
skip_ci:
description: 'Whether or not to require passing CI checks before this IssueOps command can be run'
required: true
default: "false"
skip_reviews:
description: 'Whether or not to require reviews before this IssueOps command can be run'
required: true
default: "false"
param_separator:
description: 'The separator to use for parsing parameters in comments in IssueOps commands. Parameters are saved as outputs and can be used in subsequent steps'
required: true
default: "|"
allowlist:
description: 'A comma separated list of GitHub usernames or teams that should be allowed to use the IssueOps commands configured in this Action. If unset, then all users meeting the "permissions" requirement will be able to run commands. Example: "monalisa,octocat,my-org/my-team"'
required: false
default: "false"
allowlist_pat:
description: 'A GitHub personal access token with "read:org" scopes. This is only needed if you are using the "allowlist" option with a GitHub org team. For example: "my-org/my-team"'
required: false
default: "false"
skip_completing:
description: 'If set to "true", skip the process of completing the Action. This is useful if you want to customize the way this Action completes - For example, custom reactions, comments, etc'
required: true
default: "false"
fork_review_bypass:
description: 'If set to "true", allow forks to bypass the review requirement if the operation is being made on a pull request from a fork. This option is potentially dangerous if you are checking out code in your workflow as a result of invoking this Action. If the code you are checking out has not been reviewed, then you might open yourself up to a TOCTOU vulnerability. You should always ensure that the code you are checking out has been reviewed, and that you checkout an exact commit sha rather than a ref.'
required: true
default: "false"
outputs:
triggered:
description: 'The string "true" if the trigger was found, otherwise the string "false" - Just because the workflow was triggered does not mean it should continue. This is a step 1/2 check'
continue:
description: 'The string "true" if the workflow should continue, otherwise empty - Use this to conditionally control if your workflow should proceed or not. This is a step 2/2 check'
comment_body:
description: The comment body
actor:
description: The GitHub handle of the actor that invoked the IssueOps command
params:
description: The raw parameters that were passed into the IssueOps command
comment_id:
description: The comment id which triggered this action
issue_number:
description: The issue number which this Action was triggered on
initial_reaction_id:
description: The reaction id for the initial reaction on the trigger comment
fork:
description: 'The string "true" if the pull request is a fork, otherwise "false"'
fork_ref:
description: 'The true ref of the fork'
fork_label:
description: 'The API label field returned for the fork'
fork_checkout:
description: 'The console command presented in the GitHub UI to checkout a given fork locally'
fork_full_name:
description: 'The full name of the fork in "org/repo" format'
sha:
description: 'The commit sha if being used in the context of a pull request'
ref:
description: 'The ref if being used in the context of a pull request'
base_ref:
description: The base ref that the pull request is merging into (if available and run in the context of a pull request)
runs:
using: "node20"
main: "dist/index.js"
post: "dist/index.js"
Action ID: marketplace/peter-evans/create-pull-request
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/create-pull-request
Creates a pull request for changes to your repository in the actions workspace
| Name | Required | Description |
|---|---|---|
| token | Optional | The token that the action will use to create and update the pull request. Default: ${{ github.token }} |
| branch-token | Optional | The token that the action will use to create and update the branch. Defaults to the value of `token`. |
| path | Optional | Relative path under $GITHUB_WORKSPACE to the repository. Defaults to $GITHUB_WORKSPACE. |
| add-paths | Optional | A comma or newline-separated list of file paths to commit. Paths should follow git's pathspec syntax. Defaults to adding all new and modified files. |
| commit-message | Optional | The message to use when committing changes. Default: [create-pull-request] automated change |
| committer | Optional | The committer name and email address in the format `Display Name <email@address.com>`. Defaults to the GitHub Actions bot user. Default: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> |
| author | Optional | The author name and email address in the format `Display Name <email@address.com>`. Defaults to the user who triggered the workflow run. Default: ${{ github.actor }} <${{ github.actor_id }}+${{ github.actor }}@users.noreply.github.com> |
| signoff | Optional | Add `Signed-off-by` line by the committer at the end of the commit log message. |
| branch | Optional | The pull request branch name. Default: create-pull-request/patch |
| delete-branch | Optional | Delete the `branch` if it doesn't have an active pull request associated with it. |
| branch-suffix | Optional | The branch suffix type when using the alternative branching strategy. |
| base | Optional | The pull request base branch. Defaults to the branch checked out in the workflow. |
| push-to-fork | Optional | A fork of the checked out parent repository to which the pull request branch will be pushed. e.g. `owner/repo-fork`. The pull request will be created to merge the fork's branch into the parent's base. |
| sign-commits | Optional | Sign commits as `github-actions[bot]` when using `GITHUB_TOKEN`, or your own bot when using GitHub App tokens. |
| title | Optional | The title of the pull request. Default: Changes by create-pull-request action |
| body | Optional | The body of the pull request. Default: Automated changes by [create-pull-request](https://github.com/peter-evans/create-pull-request) GitHub action |
| body-path | Optional | The path to a file containing the pull request body. Takes precedence over `body`. |
| labels | Optional | A comma or newline separated list of labels. |
| assignees | Optional | A comma or newline separated list of assignees (GitHub usernames). |
| reviewers | Optional | A comma or newline separated list of reviewers (GitHub usernames) to request a review from. |
| team-reviewers | Optional | A comma or newline separated list of GitHub teams to request a review from. Note that a `repo` scoped Personal Access Token (PAT) may be required. |
| milestone | Optional | The number of the milestone to associate the pull request with. |
| draft | Optional | Create a draft pull request. Valid values are `true` (only on create), `always-true` (on create and update), and `false`. |
| maintainer-can-modify | Optional | Indicates whether maintainers can modify the pull request. Default: True |
| Name | Description |
|---|---|
| pull-request-number | The pull request number |
| pull-request-url | The URL of the pull request. |
| pull-request-operation | The pull request operation performed by the action, `created`, `updated` or `closed`. |
| pull-request-head-sha | The commit SHA of the pull request branch. |
| pull-request-branch | The pull request branch name |
name: 'Create Pull Request'
description: 'Creates a pull request for changes to your repository in the actions workspace'
inputs:
token:
description: 'The token that the action will use to create and update the pull request.'
default: ${{ github.token }}
branch-token:
description: >
The token that the action will use to create and update the branch.
Defaults to the value of `token`.
path:
description: >
Relative path under $GITHUB_WORKSPACE to the repository.
Defaults to $GITHUB_WORKSPACE.
add-paths:
description: >
A comma or newline-separated list of file paths to commit.
Paths should follow git's pathspec syntax.
Defaults to adding all new and modified files.
commit-message:
description: 'The message to use when committing changes.'
default: '[create-pull-request] automated change'
committer:
description: >
The committer name and email address in the format `Display Name <email@address.com>`.
Defaults to the GitHub Actions bot user.
default: 'github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>'
author:
description: >
The author name and email address in the format `Display Name <email@address.com>`.
Defaults to the user who triggered the workflow run.
default: '${{ github.actor }} <${{ github.actor_id }}+${{ github.actor }}@users.noreply.github.com>'
signoff:
description: 'Add `Signed-off-by` line by the committer at the end of the commit log message.'
default: false
branch:
description: 'The pull request branch name.'
default: 'create-pull-request/patch'
delete-branch:
description: >
Delete the `branch` if it doesn't have an active pull request associated with it.
default: false
branch-suffix:
description: 'The branch suffix type when using the alternative branching strategy.'
base:
description: >
The pull request base branch.
Defaults to the branch checked out in the workflow.
push-to-fork:
description: >
A fork of the checked out parent repository to which the pull request branch will be pushed.
e.g. `owner/repo-fork`.
The pull request will be created to merge the fork's branch into the parent's base.
sign-commits:
description: 'Sign commits as `github-actions[bot]` when using `GITHUB_TOKEN`, or your own bot when using GitHub App tokens.'
default: false
title:
description: 'The title of the pull request.'
default: 'Changes by create-pull-request action'
body:
description: 'The body of the pull request.'
default: 'Automated changes by [create-pull-request](https://github.com/peter-evans/create-pull-request) GitHub action'
body-path:
description: 'The path to a file containing the pull request body. Takes precedence over `body`.'
labels:
description: 'A comma or newline separated list of labels.'
assignees:
description: 'A comma or newline separated list of assignees (GitHub usernames).'
reviewers:
description: 'A comma or newline separated list of reviewers (GitHub usernames) to request a review from.'
team-reviewers:
description: >
A comma or newline separated list of GitHub teams to request a review from.
Note that a `repo` scoped Personal Access Token (PAT) may be required.
milestone:
description: 'The number of the milestone to associate the pull request with.'
draft:
description: >
Create a draft pull request.
Valid values are `true` (only on create), `always-true` (on create and update), and `false`.
default: false
maintainer-can-modify:
description: 'Indicates whether maintainers can modify the pull request.'
default: true
outputs:
pull-request-number:
description: 'The pull request number'
pull-request-url:
description: 'The URL of the pull request.'
pull-request-operation:
description: 'The pull request operation performed by the action, `created`, `updated` or `closed`.'
pull-request-head-sha:
description: 'The commit SHA of the pull request branch.'
pull-request-branch:
description: 'The pull request branch name'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'git-pull-request'
color: 'gray-dark'
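A minimal sketch of the usual pattern: check out, make a change, then let the action open a pull request for whatever is dirty in the workspace. The `@v7` tag, the generated file name, and the commit/branch names are assumptions:

```yaml
jobs:
  update-report:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      # Any change left in the workspace will be committed to the PR branch
      - name: Regenerate report (illustrative step)
        run: date > report.txt
      - uses: peter-evans/create-pull-request@v7   # version tag is an assumption
        with:
          commit-message: "chore: update report"
          branch: create-pull-request/report
          title: "Automated report update"
```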
Action ID: marketplace/mheap/github-action-pokemon-comment
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-pokemon-comment
Adds a comment thanking the author when a PR is opened, with bonus pokemon information
name: PR-kemon
description: Adds a comment thanking the author when a PR is opened, with bonus pokemon information
runs:
using: docker
image: Dockerfile
branding:
icon: message-circle
color: red
Action ID: marketplace/github/actions-oidc-debugger
Author: Unknown
Publisher: github
Repository: github.com/github/actions-oidc-debugger
Print the GitHub Actions OIDC claims.
| Name | Required | Description |
|---|---|---|
| audience | Required | The audience to use when requesting the JWT. Your GitHub server URL and repository owner (e.g. https://github.com/github). |
name: 'OIDC Debugger'
description: 'Print the GitHub Actions OIDC claims.'
branding:
icon: 'activity'
color: 'red'
inputs:
audience:
description: 'The audience to use when requesting the JWT. Your GitHub server URL and repository owner (e.g. https://github.com/github).'
required: true
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.audience }}
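Because the action requests an OIDC token, the calling job needs `id-token: write`. A minimal sketch; the `@main` ref is an assumption:

```yaml
jobs:
  oidc-debug:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required so the job can request an OIDC JWT
      contents: read
    steps:
      - uses: github/actions-oidc-debugger@main   # ref is an assumption
        with:
          audience: 'https://github.com/github'
```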
Action ID: marketplace/docker/bake-action
Author: docker
Publisher: docker
Repository: github.com/docker/bake-action
GitHub Action to use Docker Buildx Bake as a high-level build command
| Name | Required | Description |
|---|---|---|
| builder | Optional | Builder instance |
| workdir | Optional | Working directory of bake execution. Default: . |
| source | Optional | Context to build from. Can be either local or a remote bake definition |
| allow | Optional | Allow build to access specified resources (e.g., network.host) |
| call | Optional | Set method for evaluating build (e.g., check) |
| files | Optional | List of bake definition files |
| no-cache | Optional | Do not use cache when building the image. Default: false |
| pull | Optional | Always attempt to pull a newer version of the image. Default: false |
| load | Optional | Load is a shorthand for --set=*.output=type=docker. Default: false |
| provenance | Optional | Provenance is a shorthand for --set=*.attest=type=provenance |
| push | Optional | Push is a shorthand for --set=*.output=type=registry. Default: false |
| sbom | Optional | SBOM is a shorthand for --set=*.attest=type=sbom |
| set | Optional | List of targets values to override (eg. targetpattern.key=value) |
| targets | Optional | List of bake targets |
| github-token | Optional | API token used to authenticate to a Git repository for remote definitions. Default: ${{ github.token }} |
| Name | Description |
|---|---|
| metadata | Build result metadata |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: "Docker Buildx Bake"
description: "GitHub Action to use Docker Buildx Bake as a high-level build command"
author: 'docker'
branding:
icon: 'anchor'
color: 'blue'
inputs:
builder:
description: "Builder instance"
required: false
workdir:
description: "Working directory of bake execution"
required: false
default: '.'
source:
description: "Context to build from. Can be either local or a remote bake definition"
required: false
allow:
description: "Allow build to access specified resources (e.g., network.host)"
required: false
call:
description: "Set method for evaluating build (e.g., check)"
required: false
files:
description: "List of bake definition files"
required: false
no-cache:
description: "Do not use cache when building the image"
required: false
default: 'false'
pull:
description: "Always attempt to pull a newer version of the image"
required: false
default: 'false'
load:
description: "Load is a shorthand for --set=*.output=type=docker"
required: false
default: 'false'
provenance:
description: "Provenance is a shorthand for --set=*.attest=type=provenance"
required: false
push:
description: "Push is a shorthand for --set=*.output=type=registry"
required: false
default: 'false'
sbom:
description: "SBOM is a shorthand for --set=*.attest=type=sbom"
required: false
set:
description: "List of targets values to override (eg. targetpattern.key=value)"
required: false
targets:
description: "List of bake targets"
required: false
github-token:
description: "API token used to authenticate to a Git repository for remote definitions"
default: ${{ github.token }}
required: false
outputs:
metadata:
description: 'Build result metadata'
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
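A hedged sketch of a typical bake invocation; the builder setup step, version tags, bake file name, and target name are assumptions:

```yaml
jobs:
  bake:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3   # a Buildx builder is usually set up first
      - uses: docker/bake-action@v5           # version tag is an assumption
        id: bake
        with:
          files: docker-bake.hcl              # illustrative bake definition file
          targets: default
          push: false
      # Inspect the build result metadata exposed by the action
      - run: echo '${{ steps.bake.outputs.metadata }}'
```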
Action ID: marketplace/jidicula/go-action
Author: devigned
Publisher: jidicula
Repository: github.com/jidicula/go-action
Simple action to send a tweet via a GitHub Action.
| Name | Required | Description |
|---|---|---|
| message | Required | message you want to tweet |
| apiKey | Required | api key for Twitter api |
| apiKeySecret | Required | api key secret for Twitter api |
| accessToken | Required | access token for Twitter api |
| accessTokenSecret | Required | access token secret for Twitter api |
| Name | Description |
|---|---|
| errorMessage | if something went wrong, the error message |
| sentMessage | message sent to Twitter |
name: Tweeter Action
author: devigned
description: Simple action to send a tweet via a GitHub Action.
inputs:
message:
description: 'message you want to tweet'
required: true
apiKey:
description: 'api key for Twitter api'
required: true
apiKeySecret:
description: 'api key secret for Twitter api'
required: true
accessToken:
description: 'access token for Twitter api'
required: true
accessTokenSecret:
description: 'access token secret for Twitter api'
required: true
outputs:
errorMessage:
description: 'if something went wrong, the error message'
sentMessage:
description: 'message sent to Twitter'
runs:
using: docker
image: Dockerfile
# using: docker
# image: docker://ghcr.io/the-gophers/go-action:1.0.0
args:
- --message
- "${{ inputs.message }}"
- --apiKey
- ${{ inputs.apiKey }}
- --apiKeySecret
- ${{ inputs.apiKeySecret }}
- --accessToken
- ${{ inputs.accessToken }}
- --accessTokenSecret
- ${{ inputs.accessTokenSecret }}
Action ID: marketplace/peter-evans/repository-dispatch
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/repository-dispatch
Create a repository dispatch event
| Name | Required | Description |
|---|---|---|
| token | Optional | GITHUB_TOKEN or a `repo` scoped Personal Access Token (PAT). Default: ${{ github.token }} |
| repository | Optional | The full name of the repository to send the dispatch. Default: ${{ github.repository }} |
| event-type | Required | A custom webhook event name. |
| client-payload | Optional | JSON payload with extra information about the webhook event that your action or workflow may use. Default: {} |
name: 'Repository Dispatch'
description: 'Create a repository dispatch event'
inputs:
token:
description: 'GITHUB_TOKEN or a `repo` scoped Personal Access Token (PAT)'
default: ${{ github.token }}
repository:
description: 'The full name of the repository to send the dispatch.'
default: ${{ github.repository }}
event-type:
description: 'A custom webhook event name.'
required: true
client-payload:
description: 'JSON payload with extra information about the webhook event that your action or workflow may use.'
default: '{}'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'target'
color: 'gray-dark'
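A minimal step-level sketch that dispatches an event to another repository; the PAT secret name, target repository, and event type are assumptions:

```yaml
steps:
  - uses: peter-evans/repository-dispatch@v3     # version tag is an assumption
    with:
      token: ${{ secrets.REPO_DISPATCH_PAT }}    # assumed secret: PAT with access to the target repo
      repository: my-org/downstream-repo         # illustrative target repository
      event-type: upstream-release
      client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}"}'
```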
Action ID: marketplace/calibreapp/image-actions
Author: Calibre
Publisher: calibreapp
Repository: github.com/calibreapp/image-actions
Compresses Images for the Web
| Name | Required | Description |
|---|---|---|
| GITHUB_TOKEN | Optional | The token that the action will use to create and update the pull request. Default: ${{ github.token }} |
| jpegQuality | Optional | JPEG quality level. Default: 85 |
| jpegProgressive | Optional | Use progressive (interlaced) scan for JPEG. Default: false |
| pngQuality | Optional | PNG quality level. Default: 80 |
| webpQuality | Optional | WEBP quality level. Default: 85 |
| avifQuality | Optional | AVIF quality level. Default: 75 |
| ignorePaths | Optional | Paths to ignore during search. Default: node_modules/** |
| compressOnly | Optional | Images will be compressed; no commits or comments will be added to your Pull Request. Default: false |
| minPctChange | Optional | Minimum percentage reduction to be committed. Default: 5 |
| Name | Description |
|---|---|
| markdown | Output param used to store the Markdown summary for subsequent actions to use |
name: "Image Actions"
author: "Calibre"
description: "Compresses Images for the Web"
inputs:
GITHUB_TOKEN:
description: "The token that the action will use to create and update the pull request."
default: ${{ github.token }}
jpegQuality:
description: "JPEG quality level"
required: false
default: "85"
jpegProgressive:
description: "Use progressive (interlaced) scan for JPEG"
required: false
default: "false"
pngQuality:
description: "PNG quality level"
required: false
default: "80"
webpQuality:
description: "WEBP quality level"
required: false
default: "85"
avifQuality:
description: "AVIF quality level"
required: false
default: "75"
ignorePaths:
description: "Paths to ignore during search"
required: false
default: "node_modules/**"
compressOnly:
description: "Images will be compressed. No commit, or comments will be added to your Pull Request"
required: false
default: "false"
minPctChange:
description: "Minimun percentage reduction to be committed"
required: false
default: "5"
outputs:
markdown:
description: "Output param used to store the Markdown summary for subsequent actions to use"
runs:
using: "docker"
image: "Dockerfile"
branding:
icon: "image"
color: "green"
Action ID: marketplace/azure/postgresql
Author: Unknown
Publisher: azure
Repository: github.com/azure/postgresql
Deploy to Azure PostgreSQL database using PL/SQL script files
| Name | Required | Description |
|---|---|---|
| server-name | Required | Server name of Azure DB for PostgreSQL. Example: fabrikam.postgres.database.azure.com |
| connection-string | Required | The connection string, including authentication information, for the Azure PostgreSQL Server. Please provide a psql connection string. Example: psql "host={host} port={port} dbname={your_database} user={user} password={your_password} sslmode=require" |
| plsql-file | Required | Path to PL/SQL script file to deploy. To specify multiple files, use the regex syntax: *.sql or folder/x/<regex>.sql |
| arguments | Optional | Additional options supported by the postgresql simple SQL shell. These options will be applied when executing the given file on the Azure DB for PostgreSQL. In case of multiple files, the same args will be applied for all files |
# postgresql action
name: 'Azure PostgreSQL Action'
description: 'Deploy to Azure PostgreSQL database using PL/SQL script files'
inputs:
server-name:
description: 'Server name of Azure DB for PostgreSQL. Example: fabrikam.postgres.database.azure.com'
required: true
connection-string:
description: 'The connection string, including authentication information, for the Azure PostgreSQL Server. Please provide psql connection string. Example: psql "host={host} port={port} dbname={your_database} user={user} password={your_password} sslmode=require"'
required: true
plsql-file:
description: 'Path to PL/SQL script file to deploy. To specify multiple files, use the regex syntax : *.sql or folder/x/<regex>.sql'
required: true
arguments:
description: 'Additional options supported by postgresql simple SQL shell. These options will be applied when executing the given file on the Azure DB for Postgresql. In case of multiple files, the same args will be applied for all files'
required: false
runs:
using: 'node12'
main: 'lib/main.js'
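A minimal deployment sketch. The `azure/login` step, the secret names, and the script path are assumptions; the connection string would normally be stored as a secret:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: azure/login@v1                        # sign in before touching Azure resources (assumed step)
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  - uses: azure/postgresql@v1                   # version tag is an assumption
    with:
      server-name: fabrikam.postgres.database.azure.com
      connection-string: ${{ secrets.PSQL_CONNECTION_STRING }}   # assumed secret name
      plsql-file: './sql/*.sql'                 # illustrative script path
```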
Action ID: marketplace/primer/publish
Author: Unknown
Publisher: primer
Repository: github.com/primer/publish
Publish Primer projects to npm with GitHub Design Systems conventions
| Name | Required | Description |
|---|---|---|
| default_branch | Optional | Branch that releases should be cut from (usually your default branch) |
| dir | Optional | directory to find package.json in |
| dry_run | Optional | run action without publishing |
| npm_args | Optional | publish options & additional npm cli arguments |
| release_tag | Optional | Override tag to release package with. Default: latest |
name: '@primer/publish'
description: 'Publish Primer projects to npm with GitHub Design Systems conventions'
inputs:
default_branch:
description: 'Branch that releases should be cut from (usually your default branch)'
required: false
dir:
description: "directory to find package.json in"
required: false
dry_run:
description: "run action without publishing"
required: false
npm_args:
description: "publish options & additional npm cli arguments"
required: false
release_tag:
description: 'Override tag to release package with'
required: false
default: 'latest'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{inputs.npm_args}}
Action ID: marketplace/szenius/notion-update-page
Author: Unknown
Publisher: szenius
Repository: github.com/szenius/notion-update-page
Update property of Notion page on commit
| Name | Required | Description |
|---|---|---|
| gh-username | Required | GitHub username of user who has access to the repository |
| gh-token | Required | GitHub access token of user who has access to the repository |
| notion-key | Required | Notion Integration Secret Key |
| notion-property-name | Required | Notion Page property to be updated |
| notion-update-value | Required | New value for Notion page property |
| existing-value | Optional | What to do with existing value in field to be updated. Default: overwrite |
| notion-property-type | Optional | Type of Notion Page property. Default: rich_text |
name: "Notion Update Page"
description: "Update property of Notion page on commit"
branding:
icon: book-open
color: white
inputs:
gh-username:
description: "GitHub username of user who has access to the repository"
required: true
gh-token:
description: "GitHub access token of user who has access to the repository"
required: true
notion-key:
description: "Notion Integration Secret Key"
required: true
notion-property-name:
description: "Notion Page property to be updated"
required: true
notion-update-value:
description: "New value for Notion page property"
required: true
existing-value:
description: "What to do with existing value in field to be updated"
required: false
default: "overwrite"
notion-property-type:
description: "Type of Notion Page property"
required: false
default: "rich_text"
runs:
using: "node16"
main: "dist/index.js"
Action ID: marketplace/dflook/terraform-test
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-test
Execute automated tests for a Terraform module
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the Terraform module under test. Default: . |
| test_directory | Optional | The directory within the module path that contains the test files. |
| test_filter | Optional | The test files to run, one per line. If not specified, all test files in the `test_directory` will be run. These are paths relative to the module path. |
| variables | Optional | Variables to set for the tests. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
| var_file | Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
| junit-xml-path | A test report in JUnit XML format. The path is relative to the Actions workspace. This will only be available when using Terraform 1.11.0 or later. |
| failure-reason | When the job outcome is `failure`, this output may be set. The value may be one of: `no-tests` (no tests were found to run) or `tests-failed` (one or more tests failed). If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
name: terraform-test
description: Execute automated tests for a Terraform module
author: Daniel Flook
inputs:
path:
description: The path to the Terraform module under test
required: false
default: "."
test_directory:
description: The directory within the module path that contains the test files.
required: false
default: ""
test_filter:
description: |
The test files to run, one per line.
If not specified, all test files in the `test_directory` will be run.
These are paths relative to the module path.
required: false
default: ""
variables:
description: |
Variables to set for the tests. This should be valid Terraform syntax - like a [variable definition file](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
outputs:
junit-xml-path:
description: |
A test report in JUnit XML format.
The path is relative to the Actions workspace.
This will only be available when using Terraform 1.11.0 or later.
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `no-tests` - No tests were found to run.
- `tests-failed` - One or more tests failed.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/test.sh
branding:
icon: globe
color: purple
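A hedged sketch that runs module tests and inspects the `failure-reason` output; the module path and version tag are assumptions:

```yaml
jobs:
  terraform-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run terraform test
        id: tests
        uses: dflook/terraform-test@v2   # version tag is an assumption
        with:
          path: modules/networking       # illustrative module path
      # Use the failure-reason output to distinguish test failures from other errors
      - name: Explain failures
        if: ${{ failure() && steps.tests.outputs.failure-reason == 'tests-failed' }}
        run: echo "One or more Terraform tests failed"
```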
Action ID: marketplace/azure/online-experimentation-analysis
Author: Unknown
Publisher: azure
Repository: github.com/azure/online-experimentation-analysis
Download the latest analysis results for an online experiment of an Azure App Configuration feature flag
| Name | Required | Description |
|---|---|---|
| subscription-id | Required | The subscription ID of the Log Analytics workspace. |
| resource-group | Required | The resource group of the Log Analytics workspace. |
| log-analytics-workspace | Required | The name of the Log Analytics workspace. |
| app-configuration-feature-flag | Required | The App Configuration feature flag of the experiment. |
| app-configuration-label | Optional | The App Configuration label. Defaults to no label. |
| metric-category-order | Optional | The analysis summary displays metrics grouped by their categories. This input specifies the display order of metric categories as a comma separated list. Unspecified categories are appended in alphabetical order, followed by the group of uncategorized metrics. Default: '' |
| lookback-days | Optional | The number of previous days to search the Log Analytics workspace for the latest analysis results. Default: 30 |
| github-actions-summary | Optional | Whether the analysis summary should be displayed in the GitHub Actions job summary. Default: true |
| Name | Description |
|---|---|
| analysis-start-time | The start time of the analysis (ISO 8601 format). |
| analysis-end-time | The end time of the analysis (ISO 8601 format). |
| summary-md | Summary of the analysis results (GitHub Flavored Markdown format). |
name: "Online Experimentation Analysis"
description: "Download the latest analysis results for an online experiment of an Azure App Configuration feature flag"
inputs:
subscription-id:
description: "The subscription ID of the Log Analytics workspace."
required: true
resource-group:
description: "The resource group of the Log Analytics workspace."
required: true
log-analytics-workspace:
description: "The name of the Log Analytics workspace."
required: true
app-configuration-feature-flag:
description: "The App Configuration feature flag of the experiment."
required: true
app-configuration-label:
description: "The App Configuration label. Default to no label."
required: false
metric-category-order:
description: >
The analysis summary displays metrics grouped by their categories. This input specifies the display order
of metric categories as a comma separated list. Unspecified categories are appended in alphabetical order,
followed by the group of uncategorized metrics. Default: ''.
required: false
default: ""
lookback-days:
description: "The number of previous days to search the Log Analytics workspace for the latest analysis results. Default: 30."
required: false
default: 30
github-actions-summary:
description: "If the analysis summary should be displayed in the GitHub Action job summary. Default: true."
required: false
default: true
outputs:
analysis-start-time:
description: "The start time of the analysis (ISO 8601 format)."
value: ${{ steps.analysis.outputs.analysis-start-time }}
analysis-end-time:
description: "The end time of the analysis (ISO 8601 format)."
value: ${{ steps.analysis.outputs.analysis-end-time }}
summary-md:
description: "Summary of the analysis results (GitHub Flavored Markdown format)."
value: ${{ steps.analysis.outputs.summary-md }}
runs:
using: composite
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install Python dependencies
shell: bash
run: |
python -m pip install --upgrade pip
pip install -r ${{ github.action_path }}/requirements.txt
- name: Download analysis results
id: analysis
shell: bash
run: python ${{ github.action_path }}/action.py
env:
SUBSCRIPTION_ID: ${{ inputs.subscription-id }}
RESOURCE_GROUP: ${{ inputs.resource-group }}
LOGANALYTICS_WORKSPACE: ${{ inputs.log-analytics-workspace }}
APPCONFIG_FEATURE_FLAG: ${{ inputs.app-configuration-feature-flag }}
APPCONFIG_LABEL: ${{ inputs.app-configuration-label }}
METRIC_CATEGORY_ORDER: ${{ inputs.metric-category-order }}
LOOKBACK_DAYS: ${{ inputs.lookback-days }}
GHA_SUMMARY: ${{ inputs.github-actions-summary }}
Action ID: marketplace/zhongfly/setup-ccache-action
Author: Chocobo1 <https://github.com/Chocobo1>
Publisher: zhongfly
Repository: github.com/zhongfly/setup-ccache-action
Setup ccache easily in your workflow, with all the tuning knobs you need!
| Name | Required | Description |
|---|---|---|
| update_packager_index | Optional | By default, this action will update the packager's indexes to avoid installation issues (`apt`/`brew` on Linux/macOS respectively). You can disable it to save some time; however, you are then responsible for ensuring the packager's indexes are up-to-date *before* using this action. Default: true |
| install_ccache | Optional | By default, this action will install ccache with the package manager. You can omit installation if you've already installed ccache and it is accessible in the shell. Default: true |
| prepend_symlinks_to_path | Optional | By default, this action will prepend ccache's compiler symlinks directory to PATH so that compiler invocations will be handled by ccache transparently. https://ccache.dev/manual/latest.html#_run_modes. Default: true |
| windows_compile_environment | Optional | Specify which compiler environment you are going to use on a Windows image. Note that this field (a single value) is mandatory if you use this action on a Windows image. Available options are: ['msvc', 'msys2'] |
| api_token | Optional | Token for using the GitHub API. Default: ${{ github.token }} |
| restore_cache | Optional | Whether to restore the cache at the start of this action. Default: true |
| store_cache | Optional | Whether to store the cache at the end of job execution. Default: true |
| remove_stale_cache | Optional | Whether to remove previous/stale cache entries after the cache store completes. This requires parameter `api_token` to be valid and the `actions: write` permission. Note that for GitHub Pull Requests, only `actions: read` permission will be given, so this feature won't work reliably there. Default: true |
| override_cache_key | Optional | Override the cache key which is used for storing/retrieving the cache. Accepts a string. Leave it empty to use the default value |
| override_cache_key_fallback | Optional | Override additional cache keys for retrieving the cache. Accepts a list of strings. Leave it empty to use the default value |
| ccache_options | Optional | Config settings for ccache. Accepts a list of key=value pairs. Ref: https://ccache.dev/manual/latest.html#_configuration_options |
| Name | Description |
|---|---|
| cache_hit | This variable will be set to `true` when there is a cache hit, otherwise `false` |
name: "Setup ccache action"
description: "Setup ccache easily in your workflow, with all the tuning knobs you need! "
author: "Chocobo1 <https://github.com/Chocobo1>"
inputs:
# setup related
update_packager_index:
description: "By default, this action will update packager's indexes to avoid installation issues
(`apt`/`brew` on linux/macOS respectively). You can disable it to save some time however you are then
responsible for ensuring the packager's indexes are up-to-date *before* using this action"
default: "true"
required: false
install_ccache:
description: "By default, this action will install ccache with package manager.
You can omit installation if you've already installed ccache and it is accessible in the shell"
default: "true"
required: false
prepend_symlinks_to_path:
description: "By default, this action will prepend ccache's compiler symlinks directory to PATH so that
compiler invocations will be handled by ccache transparently.
https://ccache.dev/manual/latest.html#_run_modes"
default: "true"
required: false
windows_compile_environment:
description: "Specify which compiler environment you are going to use on Windows image.
Note that this field (a single value) is mandatory if you use this action on a Windows image.
Available options are: ['msvc', 'msys2']"
default: ""
required: false
api_token:
description: "Token for using GitHub API"
default: ${{ github.token }}
required: false
# store, restore cache
restore_cache:
description: "Whether to restore the cache at the start of this action"
default: "true"
required: false
store_cache:
description: "Whether to store the cache at the end of job execution"
default: "true"
required: false
remove_stale_cache:
description: "Whether to remove previous/stale cache entries after store cache completed.
This requires parameter `api_token` to be valid.
This requires `actions: write` permission.
Note that for GitHub Pull Requests, only `actions: read` permission will be given so this feature won't work reliably there."
default: "true"
required: false
# cache key related
override_cache_key:
description: "Override cache key which is used for storing/retrieving the cache. Accept a string.
Leave it empty to use the default value"
default: ""
required: false
override_cache_key_fallback:
description: "Override additional cache keys for retrieving the cache. Accept a list of strings.
Leave it empty to use the default value"
default: |
required: false
# ccache specific
ccache_options:
description: "Config settings for ccache. Accept a list of key=value pairs.
Ref: https://ccache.dev/manual/latest.html#_configuration_options"
default: |
required: false
outputs:
cache_hit:
description: "This variable will be set to `true` when there is an cache hit, otherwise `false`"
runs:
using: "node20"
main: "dist/main/index.js"
post: "dist/post/index.js"
branding:
icon: "package"
color: "blue"
Action ID: marketplace/dflook/terraform-output
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-output
Retrieve the root-level outputs from a Terraform configuration.
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the Terraform root module directory. Default: . |
| workspace | Optional | Terraform workspace to get outputs from. Default: default |
| backend_config | Optional | List of Terraform backend config values, one per line. |
| backend_config_file | Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
| json_output_path | This is the path to all the root module outputs in a JSON file. The path is relative to the Actions workspace. For example, with the Terraform config `output "service_hostname" { value = "example.com" }`, the file pointed to by this output will contain `{ "service_hostname": "example.com" }`. Terraform list, set and tuple types are cast to a JSON array; map and object types are cast to a JSON object. |
name: terraform-output
description: Retrieve the root-level outputs from a Terraform configuration.
author: Daniel Flook
inputs:
path:
description: The path to the Terraform root module directory.
required: false
default: "."
workspace:
description: Terraform workspace to get outputs from
required: false
default: "default"
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
outputs:
json_output_path:
description: |
This is the path to all the root module outputs in a JSON file.
The path is relative to the Actions workspace.
For example, with the Terraform config:
```hcl
output "service_hostname" {
value = "example.com"
}
```
The file pointed to by this output will contain:
```json
{
"service_hostname": "example.com"
}
```
Terraform list, set and tuple types are cast to a JSON array, map and object types are cast to a JSON object.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/output.sh
branding:
icon: globe
color: purple
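A hedged sketch that reads root-module outputs and parses the JSON file with `jq`; the root module path, version tag, and output name (taken from the example in the metadata above) are assumptions:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Read Terraform outputs
    id: tf-output
    uses: dflook/terraform-output@v2   # version tag is an assumption
    with:
      path: environments/prod          # illustrative root module path
  - name: Use an output value
    run: jq -r '.service_hostname' "${{ steps.tf-output.outputs.json_output_path }}"
```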
Action ID: marketplace/mheap/phpunit-matcher-action
Author: Michael Heap
Publisher: mheap
Repository: github.com/mheap/phpunit-matcher-action
Adds problem matcher for PHPUnit
| Name | Required | Description |
|---|---|---|
| base_path | Optional | The base path to strip from your test output to leave just relative paths |
name: "PHPUnit Problem Matchers"
author: "Michael Heap"
description: "Adds problem matcher for PHPUnit"
runs:
using: "node20"
main: "index.js"
branding:
icon: "list"
color: "green"
inputs:
base_path:
description: "The base path to strip from your test output to leave just relative paths"
required: false
Action ID: marketplace/dkamm/ts-action
Author: Your name or organization here
Publisher: dkamm
Repository: github.com/dkamm/ts-action
Provide a description here
| Name | Required | Description |
|---|---|---|
| milliseconds | Required | Your input description here. Default: 1000 |
| Name | Description |
|---|---|
| time | Your output description here |
name: The name of your action here
description: Provide a description here
author: Your name or organization here
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: heart
color: red
# Define your inputs here.
inputs:
milliseconds:
description: Your input description here
required: true
default: '1000'
# Define your outputs here.
outputs:
time:
description: Your output description here
runs:
using: node20
main: dist/index.js
Action ID: marketplace/wearerequired/git-mirror-action
Author: Unknown
Publisher: wearerequired
Repository: github.com/wearerequired/git-mirror-action
Action for mirroring a repository in another location (Bitbucket, GitHub, GitLab, …) using SSH.
| Name | Required | Description |
|---|---|---|
| source-repo | Required | SSH URL of the source repo. |
| destination-repo | Required | SSH URL of the destination repo. |
| dry-run | Optional | Execute a dry run. Default: false |
name: 'Mirror a repository using SSH'
description: 'Action for mirroring a repository in another location (Bitbucket, GitHub, GitLab, …) using SSH.'
branding:
icon: 'copy'
color: 'orange'
inputs:
source-repo:
description: 'SSH URL of the source repo.'
required: true
default: ''
destination-repo:
description: 'SSH URL of the destination repo.'
required: true
default: ''
dry-run:
description: 'Execute a dry run.'
required: false
default: 'false'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.source-repo }}
- ${{ inputs.destination-repo }}
- ${{ inputs.dry-run }}
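A minimal mirroring sketch. The schedule, repository URLs, secret name, and the `SSH_PRIVATE_KEY` environment variable are assumptions (the upstream README describes supplying an SSH deploy key via an environment variable):

```yaml
on:
  schedule:
    - cron: '0 3 * * *'   # illustrative nightly mirror
jobs:
  mirror:
    runs-on: ubuntu-latest
    steps:
      - uses: wearerequired/git-mirror-action@v1   # version tag is an assumption
        env:
          SSH_PRIVATE_KEY: ${{ secrets.MIRROR_SSH_KEY }}   # assumed env var and secret name
        with:
          source-repo: 'git@github.com:example-org/source-repo.git'
          destination-repo: 'git@gitlab.com:example-org/mirror-repo.git'
```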
Action ID: marketplace/wearerequired/lint-action
Author: Samuel Meuli
Publisher: wearerequired
Repository: github.com/wearerequired/lint-action
GitHub Action for detecting and fixing linting errors
| Name | Required | Description |
|---|---|---|
github_token |
Optional | The GitHub token used to authenticated with GitHub. Default: ${{ github.token }} |
continue_on_error |
Optional | Whether the workflow run should also fail when linter failures are detected Default: true |
auto_fix |
Optional | Whether linters should try to fix code style issues automatically Default: false |
commit |
Optional | Whether to commit and push the changes made by auto_fix Default: true |
git_no_verify |
Optional | Bypass the pre-commit and pre-push git hooks Default: false |
git_name |
Optional | Username for auto-fix commits Default: Lint Action |
git_email |
Optional | Email address for auto-fix commits Default: lint-action@samuelmeuli.com |
commit_message |
Optional | Template for auto-fix commit messages. The "${linter}" variable can be used to insert the name of the linter which has created the auto-fix Default: Fix code style issues with ${linter} |
check_name |
Optional | Template for the name of the check run. The "${linter}" and "${dir}" variables can be used to insert the name and directory of the linter. Default: ${linter} |
neutral_check_on_warning |
Optional | Whether the check run should conclude with a neutral status instead of success when the linter finds only warnings Default: false |
stylelint |
Optional | Enable or disable stylelint checks Default: false |
stylelint_args |
Optional | Additional arguments to pass to the linter |
stylelint_dir |
Optional | Directory where the stylelint command should be run |
stylelint_extensions |
Optional | Extensions of files to check with stylelint Default: css |
stylelint_command_prefix |
Optional | Shell command to prepend to the linter command |
stylelint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
gofmt |
Optional | Enable or disable gofmt checks Default: false |
gofmt_args |
Optional | Additional arguments to pass to the linter |
gofmt_dir |
Optional | Directory where the gofmt command should be run |
gofmt_extensions |
Optional | Extensions of files to check with gofmt Default: go |
gofmt_command_prefix |
Optional | Shell command to prepend to the linter command |
gofmt_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
golint |
Optional | Enable or disable golint checks Default: false |
golint_args |
Optional | Additional arguments to pass to the linter |
golint_dir |
Optional | Directory where the golint command should be run |
golint_extensions |
Optional | Extensions of files to check with golint Default: go |
golint_command_prefix |
Optional | Shell command to prepend to the linter command |
golint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
eslint |
Optional | Enable or disable ESLint checks Default: false |
eslint_args |
Optional | Additional arguments to pass to the linter |
eslint_dir |
Optional | Directory where the ESLint command should be run |
eslint_extensions |
Optional | Extensions of files to check with ESLint Default: js |
eslint_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
eslint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
prettier |
Optional | Enable or disable Prettier checks Default: false |
prettier_args |
Optional | Additional arguments to pass to the linter |
prettier_dir |
Optional | Directory where the Prettier command should be run |
prettier_extensions |
Optional | Extensions of files to check with Prettier Default: css,html,js,json,jsx,md,sass,scss,ts,tsx,vue,yaml,yml |
prettier_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
prettier_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
xo |
Optional | Enable or disable XO checks Default: false |
xo_args |
Optional | Additional arguments to pass to the linter |
xo_dir |
Optional | Directory where the XO command should be run |
xo_extensions |
Optional | Extensions of files to check with XO Default: js |
xo_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
xo_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
tsc |
Optional | Enable or disable TypeScript checks Default: false |
tsc_args |
Optional | Additional arguments to pass to the linter |
tsc_dir |
Optional | Directory where the TSC command should be run |
tsc_extensions |
Optional | Extensions of files to check with TSC Default: ts |
tsc_command_prefix |
Optional | Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn. |
tsc_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
php_codesniffer |
Optional | Enable or disable PHP_CodeSniffer checks Default: false |
php_codesniffer_args |
Optional | Additional arguments to pass to the linter |
php_codesniffer_dir |
Optional | Directory where the PHP_CodeSniffer command should be run |
php_codesniffer_extensions |
Optional | Extensions of files to check with PHP_CodeSniffer Default: php |
php_codesniffer_command_prefix |
Optional | Shell command to prepend to the linter command |
php_codesniffer_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
autopep8 |
Optional | Enable or disable autopep8 checks Default: false |
autopep8_args |
Optional | Additional arguments to pass to the linter |
autopep8_dir |
Optional | Directory where the autopep8 command should be run |
autopep8_extensions |
Optional | Extensions of files to check with autopep8 Default: py |
autopep8_command_prefix |
Optional | Shell command to prepend to the linter command |
autopep8_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
black |
Optional | Enable or disable Black checks Default: false |
black_args |
Optional | Additional arguments to pass to the linter |
black_dir |
Optional | Directory where the Black command should be run |
black_extensions |
Optional | Extensions of files to check with Black Default: py |
black_command_prefix |
Optional | Shell command to prepend to the linter command |
black_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
clang_format |
Optional | Enable or disable ClangFormat checks Default: false |
clang_format_args |
Optional | Additional arguments to pass to the linter |
clang_format_dir |
Optional | Directory where the ClangFormat command should be run |
clang_format_extensions |
Optional | Extensions of files to check with ClangFormat Default: c,cc,cpp,h,hpp,m,mm |
clang_format_command_prefix |
Optional | Shell command to prepend to the linter command |
clang_format_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
flake8 |
Optional | Enable or disable Flake8 checks Default: false |
flake8_args |
Optional | Additional arguments to pass to the linter |
flake8_dir |
Optional | Directory where the Flake8 command should be run |
flake8_extensions |
Optional | Extensions of files to check with Flake8 Default: py |
flake8_command_prefix |
Optional | Shell command to prepend to the linter command |
flake8_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
mypy |
Optional | Enable or disable Mypy checks Default: false |
mypy_args |
Optional | Additional arguments to pass to the linter |
mypy_dir |
Optional | Directory where the Mypy command should be run |
mypy_extensions |
Optional | Extensions of files to check with Mypy Default: py |
mypy_command_prefix |
Optional | Shell command to prepend to the linter command |
mypy_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
oitnb |
Optional | Enable or disable oitnb checks Default: false |
oitnb_args |
Optional | Additional arguments to pass to the linter |
oitnb_dir |
Optional | Directory where the oitnb command should be run |
oitnb_extensions |
Optional | Extensions of files to check with oitnb Default: py |
oitnb_command_prefix |
Optional | Shell command to prepend to the linter command |
oitnb_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
pylint |
Optional | Enable or disable Pylint checks Default: false |
pylint_args |
Optional | Additional arguments to pass to the linter |
pylint_dir |
Optional | Directory where the Pylint command should be run |
pylint_extensions |
Optional | Extensions of files to check with Pylint Default: py |
pylint_command_prefix |
Optional | Shell command to prepend to the linter command |
pylint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
rubocop |
Optional | Enable or disable RuboCop checks Default: false |
rubocop_args |
Optional | Additional arguments to pass to the linter |
rubocop_dir |
Optional | Directory where the RuboCop command should be run |
rubocop_extensions |
Optional | Extensions of files to check with RuboCop Default: rb |
rubocop_command_prefix |
Optional | Shell command to prepend to the linter command |
rubocop_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
erblint |
Optional | Enable or disable ERB Lint checks Default: false |
erblint_args |
Optional | Additional arguments to pass to the linter |
erblint_dir |
Optional | Directory where the ERB Lint command should be run |
erblint_extensions |
Optional | Extensions of files to check with ERB Lint Default: erb |
erblint_command_prefix |
Optional | Shell command to prepend to the linter command |
erblint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: false |
clippy |
Optional | Enable or disable clippy Default: false |
clippy_args |
Optional | Additional arguments to pass to the linter |
clippy_dir |
Optional | Directory where the clippy command should be run |
clippy_extensions |
Optional | Extensions of files to check with clippy Default: rs |
clippy_command_prefix |
Optional | Shell command to prepend to the linter command |
clippy_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
rustfmt |
Optional | Enable or disable rustfmt Default: false |
rustfmt_args |
Optional | Additional arguments to pass to the linter Default: -- --color=never |
rustfmt_extensions |
Optional | Extensions of files to check with rustfmt Default: rs |
rustfmt_dir |
Optional | Directory where the rustfmt command should be run |
rustfmt_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swiftformat |
Optional | Enable or disable SwiftFormat checks Default: false |
swiftformat_args |
Optional | Additional arguments to pass to the linter |
swiftformat_dir |
Optional | Directory where the SwiftFormat command should be run |
swiftformat_extensions |
Optional | Extensions of files to check with SwiftFormat Default: swift |
swiftformat_command_prefix |
Optional | Shell command to prepend to the linter command |
swiftformat_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swift_format_lockwood |
Optional | Enable or disable SwiftFormat checks Default: false |
swift_format_lockwood_args |
Optional | Additional arguments to pass to the linter |
swift_format_lockwood_dir |
Optional | Directory where the SwiftFormat command should be run |
swift_format_lockwood_extensions |
Optional | Extensions of files to check with SwiftFormat Default: swift |
swift_format_lockwood_command_prefix |
Optional | Shell command to prepend to the linter command |
swift_format_lockwood_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swift_format_official |
Optional | Enable or disable swift-format checks Default: false |
swift_format_official_args |
Optional | Additional arguments to pass to the linter |
swift_format_official_dir |
Optional | Directory where the swift-format command should be run |
swift_format_official_extensions |
Optional | Extensions of files to check with swift-format Default: swift |
swift_format_official_command_prefix |
Optional | Shell command to prepend to the linter command |
swift_format_official_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
swiftlint |
Optional | Enable or disable SwiftLint checks Default: false |
swiftlint_args |
Optional | Additional arguments to pass to the linter |
swiftlint_dir |
Optional | Directory where the SwiftLint command should be run |
swiftlint_extensions |
Optional | Extensions of files to check with SwiftLint Default: swift |
swiftlint_command_prefix |
Optional | Shell command to prepend to the linter command |
swiftlint_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
dotnet_format |
Optional | Enable or disable dotnet-format checks Default: false |
dotnet_format_args |
Optional | Additional arguments to pass to the linter |
dotnet_format_dir |
Optional | Directory where the dotnet-format command should be run |
dotnet_format_extensions |
Optional | Extensions of files to check with dotnet-format Default: cs |
dotnet_format_command_prefix |
Optional | Shell command to prepend to the linter command |
dotnet_format_auto_fix |
Optional | Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well Default: true |
name: Lint Action
author: Samuel Meuli
description: GitHub Action for detecting and fixing linting errors
inputs:
github_token:
description: The GitHub token used to authenticate with GitHub.
required: false
default: ${{ github.token }}
continue_on_error:
description: Whether the workflow run should also fail when linter failures are detected
required: false
default: "true"
auto_fix:
description: Whether linters should try to fix code style issues automatically
required: false
default: "false"
commit:
description: Whether to commit and push the changes made by auto_fix
required: false
default: "true"
git_no_verify:
description: Bypass the pre-commit and pre-push git hooks
required: false
default: "false"
git_name:
description: Username for auto-fix commits
required: false
default: Lint Action
git_email:
description: Email address for auto-fix commits
required: false
default: "lint-action@samuelmeuli.com"
commit_message:
description: 'Template for auto-fix commit messages. The "${linter}" variable can be used to insert the name of the linter which has created the auto-fix'
required: false
default: "Fix code style issues with ${linter}"
check_name:
description: 'Template for the name of the check run. The "${linter}" and "${dir}" variables can be used to insert the name and directory of the linter.'
required: false
default: "${linter}"
neutral_check_on_warning:
description: Whether the check run should conclude with a neutral status instead of success when the linter finds only warnings
required: false
default: "false"
# CSS
stylelint:
description: Enable or disable stylelint checks
required: false
default: "false"
stylelint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
stylelint_dir:
description: Directory where the stylelint command should be run
required: false
stylelint_extensions:
description: Extensions of files to check with stylelint
required: false
default: "css"
stylelint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
stylelint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
# Go
gofmt:
description: Enable or disable gofmt checks
required: false
default: "false"
gofmt_args:
description: Additional arguments to pass to the linter
required: false
default: ""
gofmt_dir:
description: Directory where the gofmt command should be run
required: false
gofmt_extensions:
description: Extensions of files to check with gofmt
required: false
default: "go"
gofmt_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
gofmt_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
golint:
description: Enable or disable golint checks
required: false
default: "false"
golint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
golint_dir:
description: Directory where the golint command should be run
required: false
golint_extensions:
description: Extensions of files to check with golint
required: false
default: "go"
golint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
golint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# JavaScript
eslint:
description: Enable or disable ESLint checks
required: false
default: "false"
eslint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
eslint_dir:
description: Directory where the ESLint command should be run
required: false
eslint_extensions:
description: Extensions of files to check with ESLint
required: false
default: "js"
eslint_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
eslint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
prettier:
description: Enable or disable Prettier checks
required: false
default: "false"
prettier_args:
description: Additional arguments to pass to the linter
required: false
default: ""
prettier_dir:
description: Directory where the Prettier command should be run
required: false
prettier_extensions:
description: Extensions of files to check with Prettier
required: false
default: "css,html,js,json,jsx,md,sass,scss,ts,tsx,vue,yaml,yml"
prettier_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
prettier_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
xo:
description: Enable or disable XO checks
required: false
default: "false"
xo_args:
description: Additional arguments to pass to the linter
required: false
default: ""
xo_dir:
description: Directory where the XO command should be run
required: false
xo_extensions:
description: Extensions of files to check with XO
required: false
default: "js"
xo_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
xo_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
# TypeScript
tsc:
description: Enable or disable TypeScript checks
required: false
default: "false"
tsc_args:
description: Additional arguments to pass to the linter
required: false
default: ""
tsc_dir:
description: Directory where the TSC command should be run
required: false
tsc_extensions:
description: Extensions of files to check with TSC
required: false
default: "ts"
tsc_command_prefix:
description: Shell command to prepend to the linter command. Will default to `npx --no-install` for NPM and `yarn run --silent` for Yarn.
required: false
default: ""
tsc_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# PHP
php_codesniffer:
description: Enable or disable PHP_CodeSniffer checks
required: false
default: "false"
php_codesniffer_args:
description: Additional arguments to pass to the linter
required: false
default: ""
php_codesniffer_dir:
description: Directory where the PHP_CodeSniffer command should be run
required: false
php_codesniffer_extensions:
description: Extensions of files to check with PHP_CodeSniffer
required: false
default: "php"
php_codesniffer_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
php_codesniffer_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# Python
autopep8:
description: Enable or disable autopep8 checks
required: false
default: "false"
autopep8_args:
description: Additional arguments to pass to the linter
required: false
default: ""
autopep8_dir:
description: Directory where the autopep8 command should be run
required: false
autopep8_extensions:
description: Extensions of files to check with autopep8
required: false
default: "py"
autopep8_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
autopep8_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
black:
description: Enable or disable Black checks
required: false
default: "false"
black_args:
description: Additional arguments to pass to the linter
required: false
default: ""
black_dir:
description: Directory where the Black command should be run
required: false
black_extensions:
description: Extensions of files to check with Black
required: false
default: "py"
black_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
black_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
clang_format:
description: Enable or disable ClangFormat checks
required: false
default: "false"
clang_format_args:
description: Additional arguments to pass to the linter
required: false
default: ""
clang_format_dir:
description: Directory where the ClangFormat command should be run
required: false
clang_format_extensions:
description: Extensions of files to check with ClangFormat
required: false
default: "c,cc,cpp,h,hpp,m,mm"
clang_format_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
clang_format_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
flake8:
description: Enable or disable Flake8 checks
required: false
default: "false"
flake8_args:
description: Additional arguments to pass to the linter
required: false
default: ""
flake8_dir:
description: Directory where the Flake8 command should be run
required: false
flake8_extensions:
description: Extensions of files to check with Flake8
required: false
default: "py"
flake8_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
flake8_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
mypy:
description: Enable or disable Mypy checks
required: false
default: "false"
mypy_args:
description: Additional arguments to pass to the linter
required: false
default: ""
mypy_dir:
description: Directory where the Mypy command should be run
required: false
mypy_extensions:
description: Extensions of files to check with Mypy
required: false
default: "py"
mypy_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
mypy_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
oitnb:
description: Enable or disable oitnb checks
required: false
default: "false"
oitnb_args:
description: Additional arguments to pass to the linter
required: false
default: ""
oitnb_dir:
description: Directory where the oitnb command should be run
required: false
oitnb_extensions:
description: Extensions of files to check with oitnb
required: false
default: "py"
oitnb_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
oitnb_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
pylint:
description: Enable or disable Pylint checks
required: false
default: "false"
pylint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
pylint_dir:
description: Directory where the Pylint command should be run
required: false
pylint_extensions:
description: Extensions of files to check with Pylint
required: false
default: "py"
pylint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
pylint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# Ruby
rubocop:
description: Enable or disable RuboCop checks
required: false
default: "false"
rubocop_args:
description: Additional arguments to pass to the linter
required: false
default: ""
rubocop_dir:
description: Directory where the RuboCop command should be run
required: false
rubocop_extensions:
description: Extensions of files to check with RuboCop
required: false
default: "rb"
rubocop_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
rubocop_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
erblint:
description: Enable or disable ERB Lint checks
required: false
default: "false"
erblint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
erblint_dir:
description: Directory where the ERB Lint command should be run
required: false
erblint_extensions:
description: Extensions of files to check with ERB Lint
required: false
default: "erb"
erblint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
erblint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "false"
# Rust
clippy:
description: Enable or disable clippy
required: false
default: "false"
clippy_args:
description: Additional arguments to pass to the linter
required: false
default: ""
clippy_dir:
description: Directory where the clippy command should be run
required: false
clippy_extensions:
description: Extensions of files to check with clippy
required: false
default: "rs"
clippy_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
clippy_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
rustfmt:
description: Enable or disable rustfmt
required: false
default: "false"
rustfmt_args:
description: Additional arguments to pass to the linter
required: false
default: "-- --color=never"
rustfmt_extensions:
description: Extensions of files to check with rustfmt
required: false
default: "rs"
rustfmt_dir:
description: Directory where the rustfmt command should be run
required: false
rustfmt_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
# Swift
# Alias of `swift_format_lockwood` (for backward compatibility)
# TODO: Remove alias in v2
swiftformat:
description: Enable or disable SwiftFormat checks
required: false
default: "false"
swiftformat_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swiftformat_dir:
description: Directory where the SwiftFormat command should be run
required: false
swiftformat_extensions:
description: Extensions of files to check with SwiftFormat
required: false
default: "swift"
swiftformat_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swiftformat_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
swift_format_lockwood:
description: Enable or disable SwiftFormat checks
required: false
default: "false"
swift_format_lockwood_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swift_format_lockwood_dir:
description: Directory where the SwiftFormat command should be run
required: false
swift_format_lockwood_extensions:
description: Extensions of files to check with SwiftFormat
required: false
default: "swift"
swift_format_lockwood_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swift_format_lockwood_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
swift_format_official:
description: Enable or disable swift-format checks
required: false
default: "false"
swift_format_official_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swift_format_official_dir:
description: Directory where the swift-format command should be run
required: false
swift_format_official_extensions:
description: Extensions of files to check with swift-format
required: false
default: "swift"
swift_format_official_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swift_format_official_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
swiftlint:
description: Enable or disable SwiftLint checks
required: false
default: "false"
swiftlint_args:
description: Additional arguments to pass to the linter
required: false
default: ""
swiftlint_dir:
description: Directory where the SwiftLint command should be run
required: false
swiftlint_extensions:
description: Extensions of files to check with SwiftLint
required: false
default: "swift"
swiftlint_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
swiftlint_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
dotnet_format:
description: Enable or disable dotnet-format checks
required: false
default: "false"
dotnet_format_args:
description: Additional arguments to pass to the linter
required: false
default: ""
dotnet_format_dir:
description: Directory where the dotnet-format command should be run
required: false
dotnet_format_extensions:
description: Extensions of files to check with dotnet-format
required: false
default: "cs"
dotnet_format_command_prefix:
description: Shell command to prepend to the linter command
required: false
default: ""
dotnet_format_auto_fix:
description: Whether this linter should try to fix code style issues automatically. If set to `true`, it will only work if "auto_fix" is set to `true` as well
required: false
default: "true"
runs:
using: node20
main: ./dist/index.js
branding:
icon: check
color: green
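Example usage: the sketch below enables ESLint and Prettier with auto-fixing, using only inputs documented above. The `uses:` reference, the Node.js setup and the `npm ci` step are assumptions about a typical JavaScript project; substitute this action's actual repository slug and a published tag.
on: push
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # the linters themselves must be installed in the project (hypothetical step)
      - run: npm ci
      - name: Run linters
        uses: samuelmeuli/lint-action@v2   # assumed owner/tag
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          auto_fix: true        # lets the per-linter *_auto_fix defaults take effect and commit fixes
          eslint: true
          eslint_extensions: js,jsx,ts,tsx
          prettier: true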
Action ID: marketplace/mheap/frontmatter-json-schema-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/frontmatter-json-schema-action
Validate YAML frontmatter against a JSON schema
| Name | Required | Description |
|---|---|---|
paths |
Required | The paths to read frontmatter from (e.g. `**/*.md`) |
schema |
Optional | The JSON schema to use when validating |
schema_path |
Optional | The path to a JSON schema to use when validation. This will override `schema` |
name: Frontmatter JSON Schema Validator
description: Validate YAML frontmatter against a JSON schema
inputs:
paths:
description: The paths to read frontmatter from (e.g. `**/*.md`)
required: true
schema:
description: The JSON schema to use when validating
required: false
schema_path:
description: The path to a JSON schema to use when validation. This will override `schema`
required: false
branding:
icon: eye
color: orange
runs:
using: docker
image: "Dockerfile"
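Example usage, as a minimal sketch: validate the frontmatter of all Markdown files under docs/ against a schema file. The ref after `@` and the file paths are hypothetical.
jobs:
  frontmatter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate frontmatter
        uses: mheap/frontmatter-json-schema-action@main   # assumed ref
        with:
          paths: 'docs/**/*.md'                           # hypothetical glob
          schema_path: .github/frontmatter.schema.json    # hypothetical schema file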
Action ID: marketplace/dflook/tofu-check
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-check
Check if there are OpenTofu changes to apply
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the OpenTofu root module to check Default: . |
workspace |
Optional | OpenTofu workspace to run the plan in Default: default |
variables |
Optional | Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure` because there are outstanding changes to apply, this will be set to 'changes-to-apply'. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run a step when there are changes to apply. |
name: tofu-check
description: Check if there are OpenTofu changes to apply
author: Daniel Flook
inputs:
path:
description: Path to the OpenTofu root module to check
required: false
default: "."
workspace:
description: OpenTofu workspace to run the plan in
required: false
default: "default"
variables:
description: |
Variables to set for the tofu plan. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure` because there are outstanding changes to apply, this will be set to 'changes-to-apply'.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a step when there are changes to apply.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/check.sh
branding:
icon: globe
color: purple
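Example usage, as a minimal sketch using the documented `failure-reason` output to react to pending changes. The tag, module path, workspace name and any backend credentials (normally passed as environment variables) are assumptions.
jobs:
  drift-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/tofu-check@v1        # assumed tag
        id: drift
        with:
          path: infra/prod                # hypothetical root module
          workspace: production           # hypothetical workspace
      - if: failure() && steps.drift.outputs.failure-reason == 'changes-to-apply'
        run: echo "There are OpenTofu changes waiting to be applied"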
Action ID: marketplace/TryGhost/mysql-action
Author: Mirro Mutth
Publisher: TryGhost
Repository: github.com/TryGhost/mysql-action
Setup a MySQL database
| Name | Required | Description |
|---|---|---|
host port |
Optional | The port of host Default: 3306 |
container port |
Optional | The port of container Default: 3306 |
character set server |
Optional | --character-set-server - The character set of MySQL server Default: utf8mb4 |
collation server |
Optional | --collation-server - The character collation of MySQL server Default: utf8mb4_0900_ai_ci |
mysql version |
Optional | Version of MySQL to use Default: latest |
mysql root password |
Optional | MYSQL_ROOT_PASSWORD - root superuser password |
mysql database |
Optional | MYSQL_DATABASE - name for the default database that is created |
mysql user |
Optional | MYSQL_USER - create the specified user with superuser privileges on the created database |
mysql password |
Optional | MYSQL_PASSWORD - password for the specified user, who is granted superuser access to the created database |
authentication plugin |
Optional | Set the authentication plugin. Login/password (`mysql_native_password`) is used by default, which changes the default behaviour of MySQL 8.0+, where `caching_sha2_password` is the default. Default: mysql_native_password |
use tmpfs |
Optional | Whether or not to use Docker's tmpfs feature for MySQL's storage Default: True |
tmpfs size |
Optional | Desired size of tmpfs volume Default: 3072M |
name: 'Setup MySQL'
description: 'Setup a MySQL database'
author: 'Mirro Mutth'
branding:
icon: 'database'
color: 'orange'
inputs:
host port:
description: 'The port of host'
required: false
default: 3306
container port:
description: 'The port of container'
required: false
default: 3306
character set server:
description: '--character-set-server - The character set of MySQL server'
required: false
default: 'utf8mb4'
collation server:
description: '--collation-server - The character collation of MySQL server'
required: false
default: 'utf8mb4_0900_ai_ci'
mysql version:
description: 'Version of MySQL to use'
required: false
default: 'latest'
mysql root password:
description: 'MYSQL_ROOT_PASSWORD - root superuser password'
required: false
default: ''
mysql database:
description: 'MYSQL_DATABASE - name for the default database that is created'
required: false
default: ''
mysql user:
description: 'MYSQL_USER - create the specified user with superuser privileges on the created database'
required: false
default: ''
mysql password:
description: 'MYSQL_PASSWORD - password for the specified user, who is granted superuser access to the created database'
required: false
default: ''
authentication plugin:
description: 'Set the authentication plugin. Login/password (`mysql_native_password`) is used by default, which changes the default behaviour of MySQL 8.0+, where `caching_sha2_password` is the default.'
required: false
default: 'mysql_native_password'
use tmpfs:
description: "Whether or not to use Docker's tmpfs feature for MySQL's storage"
required: false
default: true
tmpfs size:
description: "Desired size of tmpfs volume"
required: false
default: "3072M"
runs:
using: 'docker'
image: 'Dockerfile'
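Example usage, as a minimal sketch that starts a throwaway MySQL 8.0 instance for tests. Note that the input names contain spaces, exactly as declared above; the tag, database name, user and secret are hypothetical.
steps:
  - uses: TryGhost/mysql-action@v1                      # assumed tag
    with:
      mysql version: '8.0'
      mysql database: 'app_test'                        # hypothetical
      mysql user: 'app'                                 # hypothetical
      mysql password: ${{ secrets.TEST_DB_PASSWORD }}   # hypothetical secret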
Action ID: marketplace/Ilshidur/action-discord
Author: Ilshidur
Publisher: Ilshidur
Repository: github.com/Ilshidur/action-discord
Outputs a message to Discord.
| Name | Required | Description |
|---|---|---|
args |
Required | The message to display in the Discord message. |
name: 'Actions for Discord'
description: 'Outputs a message to Discord.'
author: Ilshidur
branding:
icon: message-square
color: gray-dark
inputs:
args:
description: 'The message to display in the Discord message.'
required: true
# TODO: Handle configuration through Action inputs.
runs:
using: 'docker'
image: 'Dockerfile'
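Example usage, as a minimal sketch. The `args` input is documented above; reading the webhook URL from a DISCORD_WEBHOOK environment variable is an assumption based on the action's README and is not part of the metadata shown here.
steps:
  - name: Notify Discord
    uses: Ilshidur/action-discord@master                # assumed ref
    env:
      DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}   # assumed env var, hypothetical secret
    with:
      args: 'Build of ${{ github.repository }} finished.'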
Action ID: marketplace/github/measure-innersource
Author: github
Publisher: github
Repository: github.com/github/measure-innersource
A GitHub Action to measure and report how much innersource collaboration happens in a repo
---
name: "measure-innersource"
author: "github"
description: "A GitHub Action to measure and report how much innersource collaboration in a repo"
runs:
using: "docker"
image: "docker://ghcr.io/github/measure_innersource:v1"
branding:
icon: "bar-chart"
color: "white"
Action ID: marketplace/azure/k8s-create-secret
Author: Unknown
Publisher: azure
Repository: github.com/azure/k8s-create-secret
Create a generic secret, docker-registry secret, or TLS secret in a Kubernetes cluster such as Azure Kubernetes Service (AKS)
| Name | Required | Description |
|---|---|---|
namespace |
Optional | Choose the target Kubernetes namespace. If the namespace is not provided, the commands will run in the default namespace. |
secret-type |
Optional | Type of Kubernetes secret. Defaults to 'kubernetes.io/dockerconfigjson'. Default: kubernetes.io/dockerconfigjson |
secret-name |
Required | Name of the secret. You can use this secret name in the Kubernetes YAML configuration file. |
container-registry-url |
Optional | Container Registry URL |
container-registry-username |
Optional | Container Registry user name |
container-registry-password |
Optional | Container Registry password |
container-registry-email |
Optional | Container Registry email (optional even when using url,username,password) |
string-data |
Optional | JSON object with plaintext string data for secret ex: {"key1":"value1"} |
data |
Optional | JSON object with the serialized form of the secret data in a base64 encoded string ex: {"key1":"[base64 encoded data]"} |
tls-cert |
Optional | Base64 encoded TLS certificate (PEM format) |
tls-key |
Optional | Base64 encoded TLS private key (PEM format) |
| Name | Description |
|---|---|
secret-name |
Secret name |
name: 'Create secret in Kubernetes cluster'
description: 'Create a generic secret, docker-registry secret, or TLS secret in a Kubernetes cluster such as Azure Kubernetes Service (AKS)'
inputs:
# Please ensure you have used either azure/k8s-actions/aks-set-context or azure/k8s-actions/k8s-set-context in the workflow before this action
namespace:
description: 'Choose the target Kubernetes namespace. If the namespace is not provided, the commands will run in the default namespace.'
required: false
secret-type:
description: "Type of Kubernetes secret. Defaults to 'kubernetes.io/dockerconfigjson'."
required: false
default: 'kubernetes.io/dockerconfigjson'
secret-name:
description: 'Name of the secret. You can use this secret name in the Kubernetes YAML configuration file.'
required: true
container-registry-url:
description: 'Container Registry URL'
required: false
container-registry-username:
description: 'Container Registry user name'
required: false
container-registry-password:
description: 'Container Registry password'
required: false
container-registry-email:
description: 'Container Registry email (optional even when using url,username,password)'
required: false
string-data:
description: 'JSON object with plaintext string data for secret ex: {"key1":"value1"}'
required: false
data:
description: 'JSON object with the serialized form of the secret data in a base64 encoded string ex: {"key1":"[base64 encoded data]"}'
required: false
tls-cert:
description: 'Base64 encoded TLS certificate (PEM format)'
required: false
tls-key:
description: 'Base64 encoded TLS private key (PEM format)'
required: false
outputs:
secret-name:
description: 'Secret name'
branding:
icon: 'k8s.svg' # vector art to display in the GitHub Marketplace
color: 'blue' # optional, decorates the entry in the GitHub Marketplace
runs:
using: 'node20'
main: 'lib/index.js'
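Example usage, as a minimal sketch that creates an image-pull secret (the default `secret-type` is kubernetes.io/dockerconfigjson). As the comment in the metadata notes, the cluster context must be set earlier in the workflow; the tag, namespace, registry URL and secrets are hypothetical.
steps:
  # set the cluster context first, e.g. with azure/aks-set-context or azure/k8s-set-context
  - uses: azure/k8s-create-secret@v4                    # assumed tag
    with:
      namespace: my-app                                 # hypothetical
      secret-name: acr-pull-secret                      # hypothetical
      container-registry-url: myregistry.azurecr.io     # hypothetical
      container-registry-username: ${{ secrets.ACR_USERNAME }}
      container-registry-password: ${{ secrets.ACR_PASSWORD }}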
Action ID: marketplace/azure/data-factory-export-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/data-factory-export-action
Export Azure Data Factory resources to an ARM Template
| Name | Required | Description |
|---|---|---|
path |
Optional | Directory that contains the Data Factory resources Default: ./ |
id |
Optional | Data Factory resource ID Default: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup/providers/Microsoft.DataFactory/factories/dataFactory |
| Name | Description |
|---|---|
arm-template-directory |
Directory where the ARM template will be saved |
name: data-factory-export
description: Export Azure Data Factory resources to an ARM Template
inputs:
path:
description: 'Directory that contains the Data Factory resources'
required: false
default: ./
id:
description: 'Data Factory resource ID'
required: false
default: '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup/providers/Microsoft.DataFactory/factories/dataFactory'
outputs:
arm-template-directory:
description: 'Directory where the ARM template will be saved'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.path }}
- ${{ inputs.id }}
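Example usage, as a minimal sketch that exports the ARM template and uploads it as a workflow artifact via the documented `arm-template-directory` output. The ref, folder and resource ID are hypothetical.
steps:
  - uses: actions/checkout@v4
  - uses: azure/data-factory-export-action@main    # assumed ref
    id: export
    with:
      path: ./adf                                   # hypothetical folder with the Data Factory resources
      id: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.DataFactory/factories/my-factory   # hypothetical
  - uses: actions/upload-artifact@v4
    with:
      name: adf-arm-template
      path: ${{ steps.export.outputs.arm-template-directory }}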
Action ID: marketplace/FranzDiebold/github-env-vars-action
Author: Franz Diebold
Publisher: FranzDiebold
Repository: github.com/FranzDiebold/github-env-vars-action
Expose useful Environment Variables.
name: 'GitHub Environment Variables Action'
description: 'Expose useful Environment Variables.'
author: 'Franz Diebold'
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'plus-circle'
color: 'green'
Action ID: marketplace/github/webpack-bundlesize-compare-action
Author: Unknown
Publisher: github
Repository: github.com/github/webpack-bundlesize-compare-action
Compare the bundle size between the base branch and the current branch
| Name | Required | Description |
|---|---|---|
current-stats-json-path |
Required | The path to the current stats.json file |
base-stats-json-path |
Required | The path to the base stats.json file |
github-token |
Required | The Github token |
title |
Optional | An optional addition to the title, which also helps key comments, useful if running more than 1 copy of this action |
describe-assets |
Optional | Optional specification describing asset changes. May be one of the convenience
keywords "all", "none", or "changed-only" (for all except the unchanged section), OR
a string of space-separated section names, e.g. "added bigger unchanged", where
all sections are:
- added
- removed
- bigger
- smaller
- unchanged
If not provided, "all" is used (equivalent to "added removed bigger smaller unchanged")
Default: all |
name: 'Bundle comparison'
description: 'Compare the bundle size between the base branch and the current branch'
inputs:
current-stats-json-path:
description: 'The path to the current stats.json file'
required: true
base-stats-json-path:
description: 'The path to the base stats.json file'
required: true
github-token:
description: 'The Github token'
required: true
title:
description: 'An optional addition to the title, which also helps key comments, useful if running more than 1 copy of this action'
required: false
describe-assets:
description: |
Optional specification describing asset changes. May be one of the convenience
keywords "all", "none", or "changed-only" (for all except the unchanged section), OR
a string of space-separated section names, e.g. "added bigger unchanged", where
all sections are:
- added
- removed
- bigger
- smaller
- unchanged
If not provided, "all" is used (equivalent to "added removed bigger smaller unchanged")
required: false
default: 'all'
runs:
using: 'node20'
main: 'dist/index.js'
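Example usage, as a minimal sketch. It assumes earlier steps (or a separate workflow) have produced webpack stats.json files for both the base branch and the current branch at the hypothetical paths shown; the tag is also assumed.
steps:
  - uses: github/webpack-bundlesize-compare-action@v1   # assumed tag
    with:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      current-stats-json-path: ./dist/stats.json         # hypothetical, built from the PR branch
      base-stats-json-path: ./base/dist/stats.json       # hypothetical, built from the base branch
      describe-assets: changed-only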
Action ID: marketplace/yeslayla/butler-publish-itchio-action
Author: yeslayla
Publisher: yeslayla
Repository: github.com/yeslayla/butler-publish-itchio-action
Publish releases to Itch.io using Butler
name: "Butler Push"
description: "Publish releases to Itch.io using Butler"
author: yeslayla
runs:
using: docker
image: Dockerfile
branding:
icon: upload
color: white
Action ID: marketplace/amirisback/android-app-call-submodule-as-library
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-app-call-submodule-as-library
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/actions/ai-inference
Author: GitHub
Publisher: actions
Repository: github.com/actions/ai-inference
Generate an AI response based on a provided prompt
| Name | Required | Description |
|---|---|---|
prompt |
Optional | The prompt for the model |
prompt-file |
Optional | Path to a file containing the prompt (supports .txt and .prompt.yml formats) |
input |
Optional | Template variables in YAML format for .prompt.yml files |
file_input |
Optional | Template variables in YAML format mapping variable names to file paths. The file contents will be used for templating. |
model |
Optional | The model to use Default: openai/gpt-4o |
endpoint |
Optional | The endpoint to use Default: https://models.github.ai/inference |
system-prompt |
Optional | The system prompt for the model Default: You are a helpful assistant |
system-prompt-file |
Optional | Path to a file containing the system prompt |
max-tokens |
Optional | The maximum number of tokens to generate Default: 200 |
token |
Optional | The token to use Default: ${{ github.token }} |
enable-github-mcp |
Optional | Enable Model Context Protocol integration with GitHub tools Default: false |
github-mcp-token |
Optional | The token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work. |
github-mcp-toolsets |
Optional | Comma-separated list of toolsets to enable for GitHub MCP (e.g., "repos,issues,pull_requests,actions"). Use "all" for all toolsets, "default" for default set. If not specified, uses default toolsets (context,repos,issues,pull_requests,users). |
| Name | Description |
|---|---|
response |
The response from the model |
response-file |
The file path where the response is saved |
name: 'AI Inference'
description: Generate an AI response based on a provided prompt
author: 'GitHub'
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: 'message-square'
color: red
# Define your inputs here.
inputs:
prompt:
description: The prompt for the model
required: false
default: ''
prompt-file:
description: Path to a file containing the prompt (supports .txt and .prompt.yml
formats)
required: false
default: ''
input:
description: Template variables in YAML format for .prompt.yml files
required: false
default: ''
file_input:
description: Template variables in YAML format mapping variable names to file paths. The file contents will be used for templating.
required: false
default: ''
model:
description: The model to use
required: false
default: 'openai/gpt-4o'
endpoint:
description: The endpoint to use
required: false
default: 'https://models.github.ai/inference'
system-prompt:
description: The system prompt for the model
required: false
default: 'You are a helpful assistant'
system-prompt-file:
description: Path to a file containing the system prompt
required: false
default: ''
max-tokens:
description: The maximum number of tokens to generate
required: false
default: '200'
token:
description: The token to use
required: false
default: ${{ github.token }}
enable-github-mcp:
description: Enable Model Context Protocol integration with GitHub tools
required: false
default: 'false'
github-mcp-token:
description: The token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work.
required: false
default: ''
github-mcp-toolsets:
description: 'Comma-separated list of toolsets to enable for GitHub MCP (e.g., "repos,issues,pull_requests,actions"). Use "all" for all toolsets, "default" for default set. If not specified, uses default toolsets (context,repos,issues,pull_requests,users).'
required: false
default: ''
# Define your outputs here.
outputs:
response:
description: The response from the model
response-file:
description: The file path where the response is saved
runs:
using: node24
main: dist/index.js
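Example usage, as a minimal sketch that asks the model to summarize the triggering issue and echoes the documented `response` output. The tag is assumed, and the `models: read` permission is an assumption about what GitHub Models requires rather than something stated in the metadata above.
on:
  issues:
    types: [opened]
jobs:
  summarize:
    runs-on: ubuntu-latest
    permissions:
      models: read                       # assumed requirement for GitHub Models
    steps:
      - id: ai
        uses: actions/ai-inference@v1    # assumed tag
        with:
          prompt: 'Summarize this issue title in one sentence: ${{ github.event.issue.title }}'
          system-prompt: 'You are a concise triage assistant.'
          max-tokens: 100
      - run: echo "${{ steps.ai.outputs.response }}"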
Action ID: marketplace/MarketingPipeline/GitHub-Downloader-Action
Author: github.com/MarketingPipeline
Publisher: MarketingPipeline
Repository: github.com/MarketingPipeline/GitHub-Downloader-Action
Action description
| Name | Required | Description |
|---|---|---|
filepath |
Optional | File Path To Place Files |
repo |
Required | GitHub repository italia/publiccode-parser-action |
overwrite |
Optional | Overwrite Files In Repo |
name: 'Action name'
description: 'Action description'
author: 'github.com/MarketingPipeline'
inputs:
filepath:
description: 'File Path To Place Files'
default: ''
required: false
repo:
description: 'GitHub repository italia/publiccode-parser-action'
default: ''
required: true
overwrite:
description: 'Overwrite Files In Repo'
default: ''
required: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.repo }}
- ${{ inputs.filepath }}
- ${{ inputs.overwrite }}
branding:
icon: 'activity'
color: 'white'
Action ID: marketplace/azure/run-sqlpackage-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/run-sqlpackage-action
Runs SqlPackage command on a target database
| Name | Required | Description |
|---|---|---|
action |
Required | Action parameter to run with SqlPackage. Supported values are: Publish, DeployReport, DriftReport, Script |
sourcepath |
Required | The path in which to look for the DACPAC file. If multiple files exist, all of them are processed |
profile |
Required | The profile path to use during the execution. It has to be an xml file |
database-server |
Optional | Database server URL (without protocol). If not indicated in the publishing profile, it has to be indicated here. |
database-name |
Optional | Database name. If not indicated in the publishing profile, it has to be indicated here. |
authtoken |
Optional | The authentication token used to connect to the database, if credentials are not indicated in the connection string |
outputpath |
Optional | The output folder where assets will be generated if any Default: . |
outputfile |
Optional | The output file name. The final name of the file will be [dacpac_name].[outputfile] Default: deployreport.xml |
name: run-sqlpackage
description: 'Runs SqlPackage command on a target database'
inputs:
action:
description: 'Action parameter to run with SqlPackage. Supported values are: Publish, DeployReport, DriftReport, Script'
required: true
sourcepath:
description: 'The path in which to look for the DACPAC file. If multiple files exist, all of them are processed'
required: true
profile:
description: 'The profile path to use during the execution. It has to be an xml file'
required: true
database-server:
description: 'Database server URL (without protocol). If not indicated in the publishing profile, it has to be indicated here.'
required: false
default: ''
database-name:
description: 'Database name. If not indicated in the publishing profile, it has to be indicated here.'
required: false
default: ''
authtoken:
description: 'The authentication token used to connect to the database, if credentials are not indicated in the connection string'
required: false
default: ''
outputpath:
description: 'The output folder where assets will be generated if any'
required: false
default: .
outputfile:
description: 'The output file name. The final name of the file will be [dacpac_name].[outputfile]'
required: false
default: 'deployreport.xml'
runs:
using: "composite"
steps:
- name: Installing SQL Data Tools
shell: bash
run: |
if test -f "/opt/sqlpackage/sqlpackage"; then
echo "::debug::SqlPackage already installed in the context"
else
sudo apt-get install libunwind8
wget --progress=bar:force -q -O sqlpackage.zip \
https://aka.ms/sqlpackage-linux \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod a+x /opt/sqlpackage/sqlpackage \
&& rm sqlpackage.zip
fi
- id: sqlpackage
name: Running SqlPackage tool
shell: bash
run: |
echo "::debug::Ensuring target folder '${{ inputs.outputpath }}'"
mkdir -p ${{ inputs.outputpath }}
echo "::debug::Looking for dacpac files at '${{ inputs.sourcepath }}'"
PACKAGE_PATHS=$(find ${{ inputs.sourcepath }} -name '*.dacpac' -exec basename {} \;)
for PACKAGE in $PACKAGE_PATHS
do
echo "::debug::Runing ${{ inputs.action }} on package $PACKAGE"
SQLPACKAGE_CMD="/opt/sqlpackage/sqlpackage \
/Action:${{ inputs.action }} \
/SourceFile:${{ inputs.sourcepath }}/$PACKAGE \
/Profile:${{ inputs.profile }}"
if [[ '${{ inputs.database-server }}' != '' ]]; then
SQLPACKAGE_CMD="$SQLPACKAGE_CMD \
/TargetServerName:${{ inputs.database-server }}"
fi
if [[ '${{ inputs.database-name }}' != '' ]]; then
SQLPACKAGE_CMD="$SQLPACKAGE_CMD \
/TargetDatabaseName:${{ inputs.database-name }}"
fi
if [[ '${{ inputs.authtoken }}' != '' ]]; then
SQLPACKAGE_CMD="$SQLPACKAGE_CMD \
/AccessToken:${{ inputs.authtoken }}"
fi
if [[ '${{ inputs.action }}' != 'Publish' ]]; then
PACKAGE_NAME="${PACKAGE%.*}"
SQLPACKAGE_CMD="$SQLPACKAGE_CMD \
/OutputPath:${{ inputs.outputpath }}/$PACKAGE_NAME.${{ inputs.outputfile }} \
/OverwriteFiles:True"
fi
echo "::debug::SqlPackage intruction is '$SQLPACKAGE_CMD'"
eval $SQLPACKAGE_CMD
done
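Example usage, as a minimal sketch that generates a deploy report for every DACPAC found under a hypothetical artifacts folder; the tag, paths, server name and secret are assumptions.
steps:
  - uses: actions/checkout@v4
  - uses: azure/run-sqlpackage-action@v1               # assumed tag
    with:
      action: DeployReport
      sourcepath: ./artifacts                          # hypothetical folder containing *.dacpac files
      profile: ./profiles/prod.publish.xml             # hypothetical publishing profile
      database-server: myserver.database.windows.net   # hypothetical
      database-name: mydb                              # hypothetical
      authtoken: ${{ secrets.SQL_ACCESS_TOKEN }}       # hypothetical secret
      outputpath: ./reports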
Action ID: marketplace/axel-op/docker-labels-retriever
Author: axel-op
Publisher: axel-op
Repository: github.com/axel-op/docker-labels-retriever
Inspects labels of remote images on Docker Hub, GitHub Packages or Google Container Registry, without pulling them
| Name | Required | Description |
|---|---|---|
registry |
Required | Registry of the image. |
image |
Required | Docker image to inspect. |
accessToken |
Optional | Token to use to authenticate. |
dockerHubUsername |
Optional | Username of a Docker Hub account for private Docker Hub images. |
hostname |
Optional | Hostname for Google Container Registry. |
name: "Docker Labels Retriever"
description: "Inspects labels of remote images on Docker Hub, GitHub Packages or Google Container Registry, without pulling them"
author: "axel-op"
branding:
icon: "tag"
color: "blue"
inputs:
registry:
description: "Registry of the image."
required: true
image:
description: "Docker image to inspect."
required: true
accessToken:
description: "Token to use to authenticate."
required: false
dockerHubUsername:
description: "Username of a Docker Hub account for private Docker Hub images."
required: false
hostname:
description: "Hostname for Google Container Registry."
required: false
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/rhysd/paths-filter
Author: Michal Dorner <dorner.michal@gmail.com>
Publisher: rhysd
Repository: github.com/rhysd/paths-filter
Execute your workflow steps only if relevant files are modified.
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub Access Token Default: ${{ github.token }} |
working-directory |
Optional | Relative path under $GITHUB_WORKSPACE where the repository was checked out. |
ref |
Optional | Git reference (e.g. branch name) from which the changes will be detected. This option is ignored if action is triggered by pull_request event. |
base |
Optional | Git reference (e.g. branch name) against which the changes will be detected. Defaults to repository default branch (e.g. master). If it references same branch it was pushed to, changes are detected against the most recent commit before the push. This option is ignored if action is triggered by pull_request event. |
filters |
Required | Path to the configuration file or YAML string with filters definition |
list-files |
Optional | Enables listing of files matching the filter:
'none' - Disables listing of matching files (default).
'csv' - Comma-separated list of filenames.
If needed it uses double quotes to wrap filename with unsafe characters.
'json' - Serialized as JSON array.
'shell' - Space delimited list usable as command line argument list in linux shell.
If needed it uses single or double quotes to wrap filename with unsafe characters.
'escape'- Space delimited list usable as command line argument list in linux shell.
Backslash escapes every potentially unsafe character.
Default: none |
initial-fetch-depth |
Optional | How many commits are initially fetched from base branch.
If needed, each subsequent fetch doubles the previously requested number of commits
until the merge-base is found or there are no more commits in the history.
This option takes effect only when changes are detected using git against different base branch.
Default: 100 |
| Name | Description |
|---|---|
changes |
JSON array with names of all filters matching any of changed files |
name: 'Paths Changes Filter'
description: 'Execute your workflow steps only if relevant files are modified.'
author: 'Michal Dorner <dorner.michal@gmail.com>'
inputs:
token:
description: 'GitHub Access Token'
required: false
default: ${{ github.token }}
working-directory:
description: 'Relative path under $GITHUB_WORKSPACE where the repository was checked out.'
required: false
ref:
description: |
Git reference (e.g. branch name) from which the changes will be detected.
This option is ignored if action is triggered by pull_request event.
required: false
base:
description: |
Git reference (e.g. branch name) against which the changes will be detected. Defaults to repository default branch (e.g. master).
If it references same branch it was pushed to, changes are detected against the most recent commit before the push.
This option is ignored if action is triggered by pull_request event.
required: false
filters:
description: 'Path to the configuration file or YAML string with filters definition'
required: true
list-files:
description: |
Enables listing of files matching the filter:
'none' - Disables listing of matching files (default).
'csv' - Comma-separated list of filenames.
If needed it uses double quotes to wrap filename with unsafe characters.
'json' - Serialized as JSON array.
'shell' - Space delimited list usable as command line argument list in linux shell.
If needed it uses single or double quotes to wrap filename with unsafe characters.
'escape'- Space delimited list usable as command line argument list in linux shell.
Backslash escapes every potentially unsafe character.
required: false
default: none
initial-fetch-depth:
description: |
How many commits are initially fetched from base branch.
If needed, each subsequent fetch doubles the previously requested number of commits
until the merge-base is found or there are no more commits in the history.
This option takes effect only when changes are detected using git against different base branch.
required: false
default: '100'
outputs:
changes:
description: JSON array with names of all filters matching any of changed files
runs:
using: 'node20'
main: 'dist/index.js'
branding:
color: blue
icon: filter
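A minimal workflow sketch showing how this filter action might be wired up. The `dorny/paths-filter@v3` reference, version tag, and filter names are assumptions for illustration only; the `filters` and `changes` names come from the metadata above.

```yaml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The action reference and version tag below are assumptions.
      - id: filter
        uses: dorny/paths-filter@v3
        with:
          # Illustrative filter definitions: a YAML mapping of filter name -> glob patterns.
          filters: |
            docs:
              - 'docs/**'
            src:
              - 'src/**'
          list-files: json
      - name: Show matched filters
        run: echo "Changed groups: ${{ steps.filter.outputs.changes }}"
```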
Action ID: marketplace/amirisback/frogo-android-sdk
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/frogo-android-sdk
SDK for any problem, to make developing Android apps easier
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Android-SDK'
description: 'SDK for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
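Since this is a Docker action with a single optional input, usage is a one-step sketch; the `@master` ref below is an assumption, so pin to a released tag where available.

```yaml
steps:
  # Ref is an assumption; myInput falls back to 'world' if omitted.
  - uses: amirisback/frogo-android-sdk@master
    with:
      myInput: 'hello'
```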
Action ID: marketplace/graalvm/setup-graalvm
Author: GraalVM Community
Publisher: graalvm
Repository: github.com/graalvm/setup-graalvm
Set up a specific version of the GraalVM JDK and add the command-line tools to the PATH
| Name | Required | Description |
|---|---|---|
java-version |
Required | Java version. See examples of supported syntax in the README file. |
java-package |
Optional | The package type (jdk or jdk+fx). Currently applies to Liberica only. Default: jdk |
distribution |
Optional | GraalVM distribution. See the list of available distributions in the README file. |
components |
Optional | Comma-separated list of GraalVM components to be installed. |
github-token |
Optional | Set it to secrets.GITHUB_TOKEN to increase rate limits when accessing the GitHub API. Defaults to github.token. Default: ${{ github.token }} |
set-java-home |
Optional | Set $JAVA_HOME to the GraalVM installation. Default: true. Default: true |
cache |
Optional | Name of the build platform to cache dependencies. It can be "maven", "gradle", or "sbt". |
check-for-updates |
Optional | Annotate jobs with update notifications, for example, when a new GraalVM release is available Default: true |
native-image-musl |
Optional | Set up musl for static image building with GraalVM Native Image. Default: false |
native-image-job-reports |
Optional | Post a job summary containing a Native Image build report. Default: false |
native-image-pr-reports |
Optional | Post a comment containing a Native Image build report on pull requests. Default: false |
native-image-pr-reports-update-existing |
Optional | Instead of posting another comment, update an existing PR comment with the latest Native Image build report. Default: false |
native-image-enable-sbom |
Optional | Automatically generate an SBOM and submit it to the GitHub dependency submission API for vulnerability and dependency tracking. Default: false |
version |
Optional | GraalVM version (release, latest, dev). |
gds-token |
Optional | Download token for the GraalVM Download Service. If provided, the action will set up GraalVM Enterprise Edition. |
| Name | Description |
|---|---|
cache-hit |
A boolean value to indicate an exact match was found for the primary key |
name: 'GitHub Action for GraalVM'
description: 'Set up a specific version of the GraalVM JDK and add the command-line tools to the PATH'
author: 'GraalVM Community'
branding:
icon: 'terminal'
color: 'blue'
inputs:
java-version:
required: true
description: 'Java version. See examples of supported syntax in the README file.'
java-package:
description: 'The package type (jdk or jdk+fx). Currently applies to Liberica only.'
required: false
default: 'jdk'
distribution:
description: 'GraalVM distribution. See the list of available distributions in the README file.'
required: false
default: ''
components:
required: false
description: 'Comma-separated list of GraalVM components to be installed.'
default: ''
github-token:
required: false
description:
'Set it to secrets.GITHUB_TOKEN to increase rate limits when accessing the GitHub API. Defaults to github.token.'
default: ${{ github.token }}
set-java-home:
required: false
description: 'Set $JAVA_HOME to the GraalVM installation. Default: true.'
default: 'true'
cache:
description: 'Name of the build platform to cache dependencies. It can be "maven", "gradle", or "sbt".'
required: false
check-for-updates:
required: false
description: 'Annotate jobs with update notifications, for example, when a new GraalVM release is available'
default: 'true'
native-image-musl:
required: false
description: 'Set up musl for static image building with GraalVM Native Image.'
default: 'false'
native-image-job-reports:
required: false
description: 'Post a job summary containing a Native Image build report.'
default: 'false'
native-image-pr-reports:
required: false
description: 'Post a comment containing a Native Image build report on pull requests.'
default: 'false'
native-image-pr-reports-update-existing:
required: false
description:
'Instead of posting another comment, update an existing PR comment with the latest Native Image build report.'
default: 'false'
native-image-enable-sbom:
required: false
description:
'Automatically generate an SBOM and submit it to the GitHub dependency submission API for vulnerability and
dependency tracking.'
default: 'false'
version:
required: false
description: 'GraalVM version (release, latest, dev).'
default: ''
gds-token:
required: false
description:
'Download token for the GraalVM Download Service. If provided, the action will set up GraalVM Enterprise Edition.'
outputs:
cache-hit:
description: 'A boolean value to indicate an exact match was found for the primary key'
runs:
using: 'node20'
main: 'dist/main.js'
post: 'dist/cleanup.js'
post-if: 'success()'
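A hedged sketch of a typical job using this setup action. The version tag and the `distribution: 'graalvm'` value are assumptions (the action's README lists the supported distributions); all other keys are documented inputs above.

```yaml
steps:
  - uses: actions/checkout@v4
  # Version tag and distribution value are assumptions; see the action's README.
  - uses: graalvm/setup-graalvm@v1
    with:
      java-version: '21'
      distribution: 'graalvm'
      github-token: ${{ secrets.GITHUB_TOKEN }}
      cache: 'maven'
      native-image-job-reports: 'true'
  - name: Verify installation
    run: java -version
```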
Action ID: marketplace/peter-evans/enable-pull-request-automerge
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/enable-pull-request-automerge
A GitHub action to enable auto-merge on a pull request
| Name | Required | Description |
|---|---|---|
token |
Optional | GITHUB_TOKEN or a `repo` scoped Personal Access Token (PAT) Default: ${{ github.token }} |
repository |
Optional | The target GitHub repository containing the pull request Default: ${{ github.repository }} |
pull-request-number |
Required | The number of the target pull request |
merge-method |
Optional | The merge method to use. `merge`, `rebase` or `squash`. Default: merge |
name: 'Enable Pull Request Automerge'
description: 'A GitHub action to enable auto-merge on a pull request'
inputs:
token:
description: 'GITHUB_TOKEN or a `repo` scoped Personal Access Token (PAT)'
default: ${{ github.token }}
repository:
description: 'The target GitHub repository containing the pull request'
default: ${{ github.repository }}
pull-request-number:
description: 'The number of the target pull request'
required: true
merge-method:
description: 'The merge method to use. `merge`, `rebase` or `squash`.'
default: merge
runs:
using: composite
steps:
- name: Lowercase
id: lowercase
shell: bash
run: |
echo merge-method=$(echo "${{ inputs.merge-method }}" | tr '[:upper:]' '[:lower:]') >> $GITHUB_OUTPUT
- name: Enable automerge
shell: bash
run: gh pr merge -R "${{ inputs.repository }}" --${{ steps.lowercase.outputs.merge-method }} --auto "${{ inputs.pull-request-number }}"
env:
GH_TOKEN: ${{ inputs.token }}
branding:
icon: 'git-pull-request'
color: 'gray-dark'
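A sketch of enabling auto-merge from a workflow. The version tag and the `ACTIONS_PAT` secret name are assumptions; the inputs themselves are documented above.

```yaml
steps:
  # Version tag and secret name are assumptions.
  - uses: peter-evans/enable-pull-request-automerge@v3
    with:
      token: ${{ secrets.ACTIONS_PAT }}   # a repo-scoped PAT
      pull-request-number: ${{ github.event.pull_request.number }}
      merge-method: squash
```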
Action ID: marketplace/skx/github-action-build
Author: Steve Kemp
Publisher: skx
Repository: github.com/skx/github-action-build
Build a project, creating artifacts
| Name | Required | Description |
|---|---|---|
builder |
Optional | The path to the build-script to run, within the repository. Default: .github/build |
name: 'github-action-build'
description: 'Build a project, creating artifacts'
author: 'Steve Kemp'
branding:
icon: settings
color: black
inputs:
builder:
description: 'The path to the build-script to run, within the repository.'
default: '.github/build'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.builder }}
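Because the build script path defaults to `.github/build`, a sketch only needs a checkout plus the action itself; the `@master` ref is an assumption.

```yaml
steps:
  - uses: actions/checkout@v4
  # Ref is an assumption; the builder input falls back to .github/build if omitted.
  - uses: skx/github-action-build@master
    with:
      builder: .github/build
```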
Action ID: marketplace/lukka/set-shell-env
Author: Luca Cappa https://github.com/lukka
Publisher: lukka
Repository: github.com/lukka/set-shell-env
Exports as workflow variables a subset of the shell environment variables, accessible in any step using 'env.VAR-NAME'
| Name | Required | Description |
|---|---|---|
shell |
Optional | The command to execute to start the shell. |
args |
Optional | Arguments to be provided to the shell to list the environment variables in the form "name=value", as the 'env' Un*x command does. |
filter |
Optional | A regular expression that selects which environment variables (defined in the specified shell) are to be set in the workflow. |
# Copyright (c) 2020 Luca Cappa
# Released under the term specified in file LICENSE.txt
# SPDX short identifier: MIT
name: 'set-shell-env'
description: "Exports as workflow variables a subset of the shell environment variables, accessible in any step using 'env.VAR-NAME'"
author: 'Luca Cappa https://github.com/lukka'
inputs:
shell:
required: false
description: <
The command to execute to start the shell.
args:
required: false
description: <
Arguments to be provided to the shell to list the environment variables in form "name=value", as the 'env' Un*x command does.
filter:
required: false
description: <
A regular expression that selects which environment variables (defined in the specified shell) to be set in the workflow.
runs:
using: 'node16'
main: './dist/index.js'
branding:
icon: 'terminal'
color: 'green'
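A sketch of exporting a filtered set of shell variables into the workflow environment. The version tag, the shell and args values, and the `MY_PREFIX_` variable names are assumptions for illustration.

```yaml
steps:
  # Version tag and example values are assumptions.
  - uses: lukka/set-shell-env@v1
    with:
      shell: bash
      args: -c env            # any invocation that prints name=value pairs
      filter: ^MY_PREFIX_.*   # only variables matching this regex are exported
  - name: Use an exported variable (hypothetical name)
    run: echo "${{ env.MY_PREFIX_EXAMPLE }}"
```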
Action ID: marketplace/julia-actions/julia-format
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-format
Formats your Julia files and suggests changes
| Name | Required | Description |
|---|---|---|
version |
Optional | Version of JuliaFormatter.jl. Examples: 1, 1.0, 1.0.44 Default: 1 |
suggestion-label |
Optional | If set, suggestions will only be shown for PRs with this label applied. |
name: 'Format suggestion with JuliaFormatter.jl'
description: "Formats your Julia files and suggest changes"
inputs:
version:
description: 'Version of JuliaFormatter.jl. Examples: 1, 1.0, 1.0.44'
default: '1'
suggestion-label:
description: 'If set, suggestions will only be shown for PRs with this label applied.'
default: ''
runs:
using: "composite"
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v2
- uses: julia-actions/cache@v2
- name: Install JuliaFormatter
shell: julia --color=yes {0}
run: |
import Pkg
# Get the desired JuliaFormatter version number:
_version_original = get(ENV, "jf-version", "1")
# Strip leading and lagging whitespace.
# We need to then convert back to String, because Pkg won't accept SubString:
version = convert(String, strip(_version_original))::String
if isempty(version)
msg = "The version input cannot be empty"
error(msg)
end
# Make sure to specify the UUID of the package:
p = Pkg.PackageSpec(
name = "JuliaFormatter",
uuid = "98e50ef6-434e-11e9-1051-2b60c6c9e899",
version = version,
)
# Install the package:
Pkg.add(p)
env:
jf-version: ${{ inputs.version }}
- name: Format
shell: julia --color=yes {0}
run: |
import JuliaFormatter
JuliaFormatter.format(".")
- name: Check for formatting errors
shell: bash
run: |
output=$(git diff --name-only)
if [ "$output" != "" ]; then
>&2 echo "Some files have not been formatted !!!"
echo "$output"
exit 1
fi
- name: Suggest
uses: reviewdog/action-suggester@v1
if: ${{ failure() && (inputs.suggestion-label == '' || contains( github.event.pull_request.labels.*.name, inputs.suggestion-label )) }}
with:
tool_name: JuliaFormatter
fail_level: error
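Because this composite performs its own checkout and Julia setup, a calling workflow can be a single step. The version tag and the label name are assumptions; `version` and `suggestion-label` are the documented inputs.

```yaml
on:
  pull_request:

jobs:
  format-suggestions:
    runs-on: ubuntu-latest
    steps:
      # Version tag is an assumption; the composite handles checkout and Julia setup itself.
      - uses: julia-actions/julia-format@v3
        with:
          version: '1'
          suggestion-label: format-me   # illustrative label name
```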
Action ID: marketplace/wearerequired/action-wordpress-plugin-asset-update
Author: 10up
Publisher: wearerequired
Repository: github.com/wearerequired/action-wordpress-plugin-asset-update
Deploy readme and asset updates to the WordPress Plugin Repository
name: 'WordPress Plugin Readme/Assets Update'
description: 'Deploy readme and asset updates to the WordPress Plugin Repository'
author: '10up'
branding:
icon: 'upload-cloud'
color: 'blue'
runs:
using: 'composite'
steps:
- id: deploy
run: ${{ github.action_path }}/deploy.sh
shell: bash
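This composite exposes no inputs; configuration is supplied through environment variables described in the action's README. A minimal sketch, assuming the conventional SVN credential variable names, which are not shown in the metadata above:

```yaml
steps:
  - uses: actions/checkout@v4
  # Version tag and environment variable names are assumptions; consult the action's README.
  - uses: wearerequired/action-wordpress-plugin-asset-update@v1
    env:
      SVN_USERNAME: ${{ secrets.SVN_USERNAME }}
      SVN_PASSWORD: ${{ secrets.SVN_PASSWORD }}
```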
Action ID: marketplace/peter-evans/close-pull
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/close-pull
A GitHub action to close a pull request and optionally delete its branch
| Name | Required | Description |
|---|---|---|
token |
Optional | GITHUB_TOKEN or a repo scoped PAT Default: ${{ github.token }} |
repository |
Optional | The GitHub repository containing the pull request Default: ${{ github.repository }} |
pull-request-number |
Optional | The number of the pull request to close Default: ${{ github.event.number }} |
comment |
Optional | A comment to make on the pull request before closing |
delete-branch |
Optional | Delete the pull request branch |
name: 'Close Pull'
description: 'A GitHub action to close a pull request and optionally delete its branch'
inputs:
token:
description: 'GITHUB_TOKEN or a repo scoped PAT'
default: ${{ github.token }}
repository:
description: 'The GitHub repository containing the pull request'
default: ${{ github.repository }}
pull-request-number:
description: 'The number of the pull request to close'
default: ${{ github.event.number }}
comment:
description: 'A comment to make on the pull request before closing'
delete-branch:
description: 'Delete the pull request branch'
default: false
runs:
using: composite
steps:
- name: Set Parameters
id: params
shell: bash
run: |
if [ -n "${{ inputs.comment }}" ]; then
comment="--comment \"${{ inputs.comment }}\""
delimiter="$(openssl rand -hex 8)"
echo "comment<<$delimiter" >> $GITHUB_OUTPUT
echo "$comment" >> $GITHUB_OUTPUT
echo "$delimiter" >> $GITHUB_OUTPUT
fi
if [ "${{ inputs.delete-branch }}" = true ]; then
echo delete-branch="--delete-branch" >> $GITHUB_OUTPUT
fi
- name: Close Pull
shell: bash
run: |
gh pr close -R "${{ inputs.repository }}" \
${{ steps.params.outputs.comment }} \
${{ steps.params.outputs.delete-branch }} \
"${{ inputs.pull-request-number }}"
env:
GH_TOKEN: ${{ inputs.token }}
branding:
icon: 'git-pull-request'
color: 'gray-dark'
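A sketch of closing a labeled pull request with a comment and branch cleanup. The version tag and the trigger condition are illustrative assumptions; `pull-request-number` defaults to the triggering event's number, as documented above.

```yaml
on:
  pull_request:
    types: [labeled]

jobs:
  close:
    if: github.event.label.name == 'invalid'   # illustrative condition
    runs-on: ubuntu-latest
    steps:
      # Version tag is an assumption.
      - uses: peter-evans/close-pull@v3
        with:
          comment: Closing this pull request automatically.
          delete-branch: true
```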
Action ID: marketplace/aws-actions/setup-sam
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/setup-sam
Setup AWS SAM CLI and add it to the PATH
| Name | Required | Description |
|---|---|---|
version |
Optional | The AWS SAM CLI version to install |
python |
Optional | The Python interpreter to use for AWS SAM CLI |
use-installer |
Optional | Set to true to install using native installers instead of pip |
token |
Optional | Authentication token to be used to call GitHub APIs |
name: "Setup AWS SAM CLI"
description: "Setup AWS SAM CLI and add it to the PATH"
branding:
icon: "terminal"
color: "orange"
inputs:
version:
description: "The AWS SAM CLI version to install"
required: false
python:
description: "The Python interpreter to use for AWS SAM CLI"
required: false
use-installer:
description: "Set to true to install using native installers instead of pip"
required: false
default: false
token:
description: "Authentication token to be used to call GITHUB Apis"
required: false
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/azure/k8s-bake
Author: Unknown
Publisher: azure
Repository: github.com/azure/k8s-bake
Use this action to bake a manifest file to be used for deployments using helm, kustomize or kompose
| Name | Required | Description |
|---|---|---|
renderEngine |
Required | Acceptable values: helm or kompose or kustomize |
helmChart |
Optional | Required if renderEngine == helm. Helm chart to bake. |
namespace |
Optional | Relevant if renderEngine == helm. Namespace to be used for Helm option. |
arguments |
Optional | Acceptable values: helm or kustomize. Arguments to be passed to the Helm or kustomize template command |
overrideFiles |
Optional | Relevant if renderEngine == helm. Array of path to override files. Each path should be mentioned on a newline |
overrides |
Optional | Relevant if renderEngine == helm. Override values to set. |
releaseName |
Optional | Relevant if renderEngine == helm. Release name to be used for Helm option. |
kustomizationPath |
Optional | Required if renderEngine == kustomize. Path to directory or the Git repository containing kustomization.yaml file. |
dockerComposeFile |
Optional | Required if renderEngine == kompose. Path(s) to Docker compose files |
helm-version |
Optional | Version of helm. Installs a specific version of helm binary. Supports semver ranges like ^3.0.0 Default: ^3.0.0 |
kubectl-version |
Optional | Version of kubectl. Installs a specific version of kubectl binary |
kompose-version |
Optional | Version of kompose. Installs a specific version of kompose binary |
silent |
Optional | When set to true, the output of the bake command would not be shown. |
| Name | Description |
|---|---|
manifestsBundle |
The location of the manifest bundles created by bake action |
name: 'Kubernetes bake'
description: 'Use this action to bake manifest file to be used for deployments using helm, kustomize or kompose'
inputs:
renderEngine:
description: 'Acceptable values: helm or kompose or kustomize'
required: true
helmChart:
description: 'Required if renderEngine == helm. Helm chart to bake.'
required: false
namespace:
description: 'Relevant if renderEngine == helm. Namespace to be used for Helm option.'
arguments:
description: 'Acceptable values: helm or kustomize. Arguments to be passed to the Helm or kustomize template command'
required: false
overrideFiles:
description: 'Relevant if renderEngine == helm. Array of path to override files. Each path should be mentioned on a newline'
required: false
overrides:
description: 'Relevant if renderEngine == helm. Override values to set.'
required: false
releaseName:
description: 'Relevant if renderEngine == helm. Release name to be used for Helm option.'
required: false
kustomizationPath:
description: 'Required if renderEngine == kustomize. Path to directory or the Git repository containing kustomization.yaml file.'
required: false
dockerComposeFile:
description: 'Required if renderEngine == kompose. Path(s) to Docker compose files'
required: false
helm-version:
description: 'Version of helm. Installs a specific version of helm binary. Supports semver ranges like ^3.0.0'
required: false
default: '^3.0.0'
kubectl-version:
description: 'Version of kubectl. Installs a specific version of kubectl binary'
required: false
kompose-version:
description: 'Version of kompose. Installs a specific version of kompose binary'
required: false
silent:
description: 'When set to true, the output of the bake command would not be shown.'
required: false
outputs:
manifestsBundle:
description: 'The location of the manifest bundles created by bake action'
branding:
color: 'green' # optional, decorates the entry in the GitHub Marketplace
runs:
using: 'node20'
main: 'lib/index.js'
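A sketch of baking a Helm chart and reading back the generated manifest bundle. The version tag, chart path, and release name are assumptions; `renderEngine`, `helmChart`, `releaseName`, and the `manifestsBundle` output come from the metadata above.

```yaml
steps:
  - uses: actions/checkout@v4
  # Version tag and chart path are assumptions.
  - id: bake
    uses: azure/k8s-bake@v3
    with:
      renderEngine: helm
      helmChart: ./charts/my-app
      releaseName: my-app
  - name: Inspect baked manifests
    run: cat "${{ steps.bake.outputs.manifestsBundle }}"
```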
Action ID: marketplace/dflook/tofu-test
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-test
Execute automated tests for an OpenTofu module
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the OpenTofu module under test Default: . |
test_directory |
Optional | The directory within the module path that contains the test files. |
test_filter |
Optional | The test files to run, one per line. If not specified, all test files in the `test_directory` will be run. These are paths relative to the module path. |
variables |
Optional | Variables to set for the tests. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure`, this output may be set. The value may be one of: - `no-tests` - No tests were found to run. - `tests-failed` - One or more tests failed. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
name: tofu-test
description: Execute automated tests for an OpenTofu module
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu module under test
required: false
default: "."
test_directory:
description: The directory within the module path that contains the test files.
required: false
default: ""
test_filter:
description: |
The test files to run, one per line.
If not specified, all test files in the `test_directory` will be run.
The are paths relative to the module path.
required: false
default: ""
variables:
description: |
Variables to set for the tests. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `no-tests` - No tests were found to run.
- `tests-failed` - One or more tests failed.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/test.sh
branding:
icon: globe
color: purple
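A sketch wiring the documented `failure-reason` output into a conditional follow-up step. The version tag, module path, and variable value are assumptions.

```yaml
jobs:
  tofu-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Version tag and module path are assumptions.
      - id: tests
        uses: dflook/tofu-test@v2
        with:
          path: modules/network
          variables: |
            environment = "test"
      - if: failure() && steps.tests.outputs.failure-reason == 'tests-failed'
        run: echo "One or more OpenTofu tests failed"
```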
Automate your GitHub workflows using Azure CLI scripts.
| Name | Required | Description |
|---|---|---|
inlineScript |
Required | Specify the script here |
azcliversion |
Optional | Azure CLI version to be used to execute the script. If not provided, latest version is used Default: agentazcliversion |
# Azure CLI Action
name: 'Azure CLI Action'
description: 'Automate your GitHub workflows using Azure CLI scripts.'
inputs:
inlineScript:
description: 'Specify the script here'
required: true
azcliversion:
description: 'Azure CLI version to be used to execute the script. If not provided, latest version is used'
required: false
default: 'agentazcliversion'
branding:
icon: 'login.svg'
color: 'blue'
runs:
using: 'node20'
main: 'dist/index.js'
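A sketch of running an inline script with this action. The `azure/cli` reference and its version tag are assumptions (the entry above omits the repository metadata), as are the preceding login step and its secret name.

```yaml
steps:
  # Logging in first is typical but separate from this action; secret name is an assumption.
  - uses: azure/login@v2
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  # Action reference and version tag are assumptions.
  - uses: azure/cli@v2
    with:
      azcliversion: latest
      inlineScript: |
        az group list --output table
```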
Action ID: marketplace/github/ossar-action
Author: GitHub
Publisher: github
Repository: github.com/github/ossar-action
Run open source security static analysis tools without the added complexity with OSSAR (Open Source Static Analysis Runner)
| Name | Required | Description |
|---|---|---|
config |
Optional | A file path to a .gdnconfig file. |
policy |
Optional | The name of the well known policy to use. If empty, defaults to the policy/github.gdnpolicy file in the action repo. |
| Name | Description |
|---|---|
sarifFile |
A file path to a SARIF results file. |
name: 'ossar-action'
description: 'Run open source security static analysis tools without the added complexity with OSSAR (Open Source Static Analysis Runner)'
author: 'GitHub'
branding:
icon: 'shield'
color: 'black'
inputs:
config:
description: A file path to a .gdnconfig file.
policy:
description: The name of the well known policy to use. If empty, defaults to the policy/github.gdnpolicy file in the action repo.
outputs:
sarifFile:
description: A file path to a SARIF results file.
runs:
using: 'node20'
main: 'lib/action.js'
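A sketch that forwards the documented `sarifFile` output to code scanning; the version tags of both actions are assumptions.

```yaml
steps:
  - uses: actions/checkout@v4
  # Version tags are assumptions.
  - id: ossar
    uses: github/ossar-action@v1
  - uses: github/codeql-action/upload-sarif@v3
    with:
      sarif_file: ${{ steps.ossar.outputs.sarifFile }}
```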
Action ID: marketplace/wearerequired/slack-messaging-action
Author: Unknown
Publisher: wearerequired
Repository: github.com/wearerequired/slack-messaging-action
Send any messages from GitHub Actions to Slack.
| Name | Required | Description |
|---|---|---|
bot_token |
Required | The Slack bot token. |
channel |
Optional | The Slack channel name. Required if no `channel_id` provided. |
channel_id |
Optional | The Slack channel ID. Required if no `channel` provided. |
payload |
Required | The JSON payload of a message to send. `channel` and `ts` are set to the values of the respective inputs. |
message_id |
Optional | The ID of the existing Slack message to update. |
| Name | Description |
|---|---|
message_id |
The unique timestamp identifier of the Slack message sent. |
name: 'Slack Messaging'
description: 'Send any messages from GitHub Actions to Slack.'
branding:
icon: message-circle
color: purple
inputs:
bot_token:
description: 'The Slack bot token.'
required: true
channel:
description: 'The Slack channel name. Required if no `channel_id` provided.'
required: false
channel_id:
description: 'The Slack channel ID. Required if no `channel` provided.'
required: false
payload:
description: 'The JSON payload of a message to send. `channel` and `ts` are set to the values of the respective inputs.'
required: true
message_id:
description: 'The ID of the existing Slack message to update.'
required: false
outputs:
message_id:
description: 'The unique timestamp identifier of the Slack message sent.'
runs:
using: 'node20'
main: 'dist/index.js'
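A sketch of posting a message and capturing the `message_id` output. The version tag, secret name, and channel ID are assumptions; `bot_token`, `channel_id`, and `payload` are the documented inputs above.

```yaml
steps:
  # Version tag, secret name, and channel ID are assumptions.
  - id: notify
    uses: wearerequired/slack-messaging-action@v2
    with:
      bot_token: ${{ secrets.SLACK_BOT_TOKEN }}
      channel_id: C0123456789
      payload: |
        {
          "text": "Deployment finished for ${{ github.repository }}"
        }
```

Feeding `steps.notify.outputs.message_id` back into the `message_id` input on a later run would update the original message rather than posting a new one.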
Action ID: marketplace/azure/azure-resource-login-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/azure-resource-login-action
Gets an Access Token for a specific Azure resource
| Name | Required | Description |
|---|---|---|
creds |
Required | Azure credentials for the service principal to log in |
resource-url |
Optional | Canonical resource url for what you want the token for |
resource-type-name |
Optional | Optional resource type name, supported values: AadGraph, AnalysisServices, Arm, Attestation, Batch, DataLake, KeyVault, OperationalInsights, ResourceManager, Storage, Synapse. Default value is Arm if not specified. Default: Arm |
| Name | Description |
|---|---|
token |
Authentication Token |
name: azure-resource-login
description: 'Gets an Access Token for an specific Azure resource'
inputs:
creds:
description: 'Azure credentials for the service principal to log in'
required: true
resource-url:
description: 'Canonical resource url for what you want the token for'
required: false
default: ''
resource-type-name:
description: 'Optional resouce type name, supported values: AadGraph, AnalysisServices, Arm, Attestation, Batch, DataLake, KeyVault, OperationalInsights, ResourceManager, Storage, Synapse. Default value is Arm if not specified.'
required: false
default: 'Arm'
outputs:
token:
description: "Authentication Token"
value: ${{ steps.adquire-token.outputs.token }}
runs:
using: "composite"
steps:
- name: Logining in into Azure
uses: Azure/login@v2.1.1
with:
creds: ${{ inputs.creds }}
enable-AzPSSession: true
- id: adquire-token
name: Adquiring access token
shell: pwsh
run: |
Set-PSRepository PSGallery -InstallationPolicy Trusted
Install-Module -Name Az.Accounts -AllowClobber
$context = Get-AzContext
if ("${{ inputs.resource-url }}" -ne "")
{
$resource = Get-AzAccessToken -ResourceUrl ${{ inputs.resource-url }} -DefaultProfile $context
}
else
{
$resource = Get-AzAccessToken -ResourceTypeName ${{ inputs.resource-type-name }} -DefaultProfile $context
}
$resourceToken = $resource.Token
write-host "::debug::Setting 'token' variable to $resourceToken"
write-host "::set-output name=token::$resourceToken"
Action ID: marketplace/lowlighter/setup-velociraptor
Author: JurassiScripts
Publisher: lowlighter
Repository: github.com/lowlighter/setup-velociraptor
Install Velociraptor scripts manager
| Name | Required | Description |
|---|---|---|
checkout |
Optional | Whether to checkout repository first Default: True |
deno-version |
Optional | Deno version to use Default: 1.x |
velociraptor-version |
Optional | Velociraptor version to use Default: latest |
velociraptor-alias |
Optional | Velociraptor alias name Default: vr |
| Name | Description |
|---|---|
deno-is-canary |
Whether installed Deno version is a canary version |
deno-version |
Deno version that was installed |
velociraptor-version |
Velociraptor version that was installed |
name: Setup Velociraptor
description: Install Velociraptor scripts manager
author: JurassiScripts
branding:
icon: terminal
color: blue
inputs:
checkout:
description: Whether to checkout repository first
default: true
required: false
deno-version:
description: Deno version to use
default: 1.x
required: false
velociraptor-version:
description: Velociraptor version to use
default: latest
required: false
velociraptor-alias:
description: Velociraptor alias name
default: vr
required: false
outputs:
deno-is-canary:
description: Whether installed Deno version is a canary version
value: ${{ steps.deno.outputs.is-canary }}
deno-version:
description: Deno version that was installed
value: ${{ steps.deno.outputs.deno-version }}
velociraptor-version:
description: Velociraptor version that was installed
value: ${{ steps.velociraptor.outputs.velociraptor-version }}
runs:
using: composite
steps:
- name: run actions/checkout@v2
uses: actions/checkout@v2
if: ${{ inputs.checkout }}
- name: run denoland/setup-deno@v1
id: deno
uses: denoland/setup-deno@v1
if: ${{ inputs.deno-version }}
with:
deno-version: ${{ inputs.deno-version }}
- name: setup velociraptor
shell: bash
run: |
case "${{ inputs.velociraptor-version }}" in
"latest") ;; "") ;;
*) VERSION=@${{ inputs.velociraptor-version }} ;;
esac
deno install --allow-all --name ${{ inputs.velociraptor-alias }} https://deno.land/x/velociraptor$VERSION/cli.ts
echo "/home/runner/.deno/bin" >> $GITHUB_PATH
- name: add velociraptor to path (macos)
shell: bash
if: ${{ contains(runner.os, 'macos') }}
run: |
echo "/Users/runner/.deno/bin" >> $GITHUB_PATH
- name: add velociraptor to path (windows)
shell: powershell
if: ${{ contains(runner.os, 'windows') }}
run: |
echo "C:\Users\runneradmin\.deno\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: compute output velociraptor-version
id: velociraptor
shell: bash
run: |
echo "::set-output name=velociraptor-version::$(${{ inputs.velociraptor-alias }} --version)"
Action ID: marketplace/julia-actions/julia-run-testitems
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-run-testitems
Run tests that are based on the test item framework
| Name | Required | Description |
|---|---|---|
juliaup-channel |
Optional | |
results-path |
Optional | |
env |
Optional |
name: 'Run Julia test items'
description: 'Run tests that are based on the test item framework'
inputs:
juliaup-channel:
required: false
results-path:
type: string
required: false
env:
required: false
default: ""
runs:
using: "composite"
steps:
- name: Compute Manifest hash
id: project-hash
shell: pwsh
run: |
$ourHash = Get-FileHash -LiteralPath "$env:GITHUB_ACTION_PATH\Manifest.toml"
"MANIFEST_HASH=$($ourHash.Hash)" | Out-File -FilePath $env:GITHUB_OUTPUT -Append
- name: Install release channel
shell: bash
run: juliaup add release
- name: Check Julia version
shell: bash
id: julia-version
run: |
echo "JULIA_VERSION=$(julia +release -v)" >> $GITHUB_OUTPUT
- uses: actions/cache@v4
id: cache-project
with:
path: ${{ runner.tool_cache }}/julia-run-testitems-depot
key: julia-run-testitems-cache-${{ runner.os }}-${{ steps.julia-version.outputs.JULIA_VERSION }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Install and precompile
if: steps.cache-project.outputs.cache-hit != 'true'
run: julia +release -e 'import Pkg; Pkg.instantiate()'
shell: bash
env:
JULIA_PROJECT: ${{ github.action_path }}
JULIA_DEPOT_PATH: ${{ runner.tool_cache }}/julia-run-testitems-depot
- uses: actions/cache/save@v4
if: steps.cache-project.outputs.cache-hit != 'true'
with:
path: ${{ runner.tool_cache }}/julia-run-testitems-depot
key: julia-run-testitems-cache-${{ runner.os }}-${{ steps.julia-version.outputs.JULIA_VERSION }}-${{ steps.project-hash.outputs.MANIFEST_HASH }}
- name: Run test items
run: julia +release --project=${{ github.action_path }} ${{ github.action_path }}/main.jl ${{ runner.debug==1 && 'debug' || 'nodebug' }}
shell: pwsh
env:
JULIA_DEPOT_PATH: ${{ runner.tool_cache }}/julia-run-testitems-depot
TEST_JULIAUP_CHANNEL: ${{ inputs.juliaup-channel }}
TEST_ENV: ${{ inputs.env }}
RESULTS_PATH: ${{ inputs.results-path }}
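The inputs above carry no descriptions, so the values below are purely illustrative; the version tag is also an assumption.

```yaml
steps:
  - uses: actions/checkout@v4
  # Version tag and input values are assumptions; the inputs are undocumented above.
  - uses: julia-actions/julia-run-testitems@v1
    with:
      juliaup-channel: release
      results-path: test-results   # hypothetical output directory
```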
Action ID: marketplace/srt32/ai-inference
Author: GitHub
Publisher: srt32
Repository: github.com/srt32/ai-inference
Generate an AI response based on a provided prompt
| Name | Required | Description |
|---|---|---|
prompt |
Optional | The prompt for the model |
prompt-file |
Optional | Path to a file containing the prompt (supports .txt and .prompt.yml formats) |
input |
Optional | Template variables in YAML format for .prompt.yml files |
file_input |
Optional | Template variables in YAML format mapping variable names to file paths. The file contents will be used for templating. |
model |
Optional | The model to use Default: openai/gpt-4o |
endpoint |
Optional | The endpoint to use Default: https://models.github.ai/inference |
system-prompt |
Optional | The system prompt for the model Default: You are a helpful assistant |
system-prompt-file |
Optional | Path to a file containing the system prompt |
max-tokens |
Optional | The maximum number of tokens to generate Default: 200 |
token |
Optional | The token to use Default: ${{ github.token }} |
enable-github-mcp |
Optional | Enable Model Context Protocol integration with GitHub tools Default: false |
github-mcp-token |
Optional | The token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work. |
| Name | Description |
|---|---|
response |
The response from the model |
response-file |
The file path where the response is saved |
name: 'AI Inference'
description: Generate an AI response based on a provided prompt
author: 'GitHub'
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: 'message-square'
color: red
# Define your inputs here.
inputs:
prompt:
description: The prompt for the model
required: false
default: ''
prompt-file:
description: Path to a file containing the prompt (supports .txt and .prompt.yml
formats)
required: false
default: ''
input:
description: Template variables in YAML format for .prompt.yml files
required: false
default: ''
file_input:
description: Template variables in YAML format mapping variable names to file paths. The file contents will be used for templating.
required: false
default: ''
model:
description: The model to use
required: false
default: 'openai/gpt-4o'
endpoint:
description: The endpoint to use
required: false
default: 'https://models.github.ai/inference'
system-prompt:
description: The system prompt for the model
required: false
default: 'You are a helpful assistant'
system-prompt-file:
description: Path to a file containing the system prompt
required: false
default: ''
max-tokens:
description: The maximum number of tokens to generate
required: false
default: '200'
token:
description: The token to use
required: false
default: ${{ github.token }}
enable-github-mcp:
description: Enable Model Context Protocol integration with GitHub tools
required: false
default: 'false'
github-mcp-token:
description: The token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work.
required: false
default: ''
# Define your outputs here.
outputs:
response:
description: The response from the model
response-file:
description: The file path where the response is saved
runs:
using: node24
main: dist/index.js
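A sketch of a prompt and response round trip. The version tag and the `models: read` permission are assumptions; the inputs and the `response` output are documented above.

```yaml
permissions:
  models: read   # assumed requirement for GitHub Models inference

jobs:
  infer:
    runs-on: ubuntu-latest
    steps:
      # Version tag is an assumption.
      - id: ai
        uses: srt32/ai-inference@v1
        with:
          prompt: Summarize the purpose of this repository in one sentence.
          model: openai/gpt-4o
          max-tokens: '200'
      - run: echo "${{ steps.ai.outputs.response }}"
```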
Action ID: marketplace/aws-actions/action-cloudwatch-metrics
Author: ROS Tooling Working Group
Publisher: aws-actions
Repository: github.com/aws-actions/action-cloudwatch-metrics
Publishes a metric to Amazon CloudWatch.
| Name | Required | Description |
|---|---|---|
metric-name |
Optional | Name of the metric to be published Default: Builds |
metric-value |
Optional | Value of the metric to be published.
true and false are respectively converted to 1.0 and 0.0 automatically. Default: ${{ job.status == 'success' }} |
metric-dimensions |
Optional | Dimension of the metric to be published, as a JSON object Default: [
{ "Name": "github.event_name", "Value": "${{ github.event_name }}" },
{ "Name": "github.ref", "Value": "${{ github.ref }}" },
{ "Name": "github.repository", "Value": "${{ github.repository }}" }
] |
metric-data |
Optional | Path to a JSON file containing CloudWatch-compatible metrics, according to https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudwatch/put-metric-data.html If this input is specified, the action will ignore the single-value inputs "metric-name", "metric-value", "metric-dimensions" and only publish the data from this file. |
namespace |
Optional | The namespace for the metric data. Default: ActionCloudWatchMetrics |
name: 'Post Amazon Cloudwatch Metrics'
description: 'Publishes a metric to Amazon CloudWatch.'
author: 'ROS Tooling Working Group'
branding:
icon: 'upload-cloud'
color: 'gray-dark'
inputs:
metric-name:
description: 'Name of the metric to be published'
default: 'Builds'
metric-value:
description: >-
Value of the metric to be published
true, and false are respectfully converted to 1.0, and 0.0 automatically
default: "${{ job.status == 'success' }}"
metric-dimensions:
description: 'Dimension of the metric to be published, as a JSON object'
default: >-
[
{ "Name": "github.event_name", "Value": "${{ github.event_name }}" },
{ "Name": "github.ref", "Value": "${{ github.ref }}" },
{ "Name": "github.repository", "Value": "${{ github.repository }}" }
]
metric-data:
description: >-
Path to a JSON file containing CloudWatch-compatible metrics, according to
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudwatch/put-metric-data.html
If this input is specified, the action will ignore the single-value inputs "metric-name", "metric-value", "metric-dimensions" and only publish the data from this file.
required: false
namespace:
description: 'The namespace for the metric data.'
default: 'ActionCloudWatchMetrics'
runs:
using: 'node12'
main: 'dist/index.js'
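A sketch that publishes a build-result metric after configuring AWS credentials. The companion credentials step, its secret name, the namespace, and the version tags are assumptions; `metric-name` and `namespace` are documented inputs above.

```yaml
steps:
  # The credentials step and secret name are assumptions; this action itself only publishes the metric.
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      aws-region: us-east-1
      role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
  - if: always()   # report success or failure of the job
    uses: aws-actions/action-cloudwatch-metrics@v1   # version tag is an assumption
    with:
      metric-name: Builds
      namespace: MyProjectCI   # illustrative namespace
```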
Action ID: marketplace/actions/stale
Author: GitHub
Publisher: actions
Repository: github.com/actions/stale
Close issues and pull requests with no recent activity
| Name | Required | Description |
|---|---|---|
repo-token |
Optional | Token for the repository. Can be passed in using `{{ secrets.GITHUB_TOKEN }}`. Default: ${{ github.token }} |
stale-issue-message |
Optional | The message to post on the issue when tagging it. If none provided, will not mark issues stale. |
stale-pr-message |
Optional | The message to post on the pull request when tagging it. If none provided, will not mark pull requests stale. |
close-issue-message |
Optional | The message to post on the issue when closing it. If none provided, will not comment when closing an issue. |
close-pr-message |
Optional | The message to post on the pull request when closing it. If none provided, will not comment when closing a pull requests. |
days-before-stale |
Optional | The number of days old an issue or a pull request can be before marking it stale. Set to -1 to never mark issues or pull requests as stale automatically. Default: 60 |
days-before-issue-stale |
Optional | The number of days old an issue can be before marking it stale. Set to -1 to never mark issues as stale automatically. Override "days-before-stale" option regarding only the issues. |
days-before-pr-stale |
Optional | The number of days old a pull request can be before marking it stale. Set to -1 to never mark pull requests as stale automatically. Override "days-before-stale" option regarding only the pull requests. |
days-before-close |
Optional | The number of days to wait to close an issue or a pull request after it being marked stale. Set to -1 to never close stale issues or pull requests. Default: 7 |
days-before-issue-close |
Optional | The number of days to wait to close an issue after it being marked stale. Set to -1 to never close stale issues. Override "days-before-close" option regarding only the issues. |
days-before-pr-close |
Optional | The number of days to wait to close a pull request after it being marked stale. Set to -1 to never close stale pull requests. Override "days-before-close" option regarding only the pull requests. |
stale-issue-label |
Optional | The label to apply when an issue is stale. Default: Stale |
close-issue-label |
Optional | The label to apply when an issue is closed. |
exempt-issue-labels |
Optional | The labels that mean an issue is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2"). |
close-issue-reason |
Optional | The reason to use when closing an issue. Default: not_planned |
stale-pr-label |
Optional | The label to apply when a pull request is stale. Default: Stale |
close-pr-label |
Optional | The label to apply when a pull request is closed. |
exempt-pr-labels |
Optional | The labels that mean a pull request is exempt from being marked as stale. Separate multiple labels with commas (eg. "label1,label2"). |
exempt-milestones |
Optional | The milestones that mean an issue or a pull request is exempt from being marked as stale. Separate multiple milestones with commas (eg. "milestone1,milestone2"). |
exempt-issue-milestones |
Optional | The milestones that mean an issue is exempt from being marked as stale. Separate multiple milestones with commas (eg. "milestone1,milestone2"). Override "exempt-milestones" option regarding only the issues. |
exempt-pr-milestones |
Optional | The milestones that mean a pull request is exempt from being marked as stale. Separate multiple milestones with commas (eg. "milestone1,milestone2"). Override "exempt-milestones" option regarding only the pull requests. |
exempt-all-milestones |
Optional | Exempt all issues and pull requests with milestones from being marked as stale. Default to false. Default: false |
exempt-all-issue-milestones |
Optional | Exempt all issues with milestones from being marked as stale. Override "exempt-all-milestones" option regarding only the issues. |
exempt-all-pr-milestones |
Optional | Exempt all pull requests with milestones from being marked as stale. Override "exempt-all-milestones" option regarding only the pull requests. |
only-labels |
Optional | Only issues or pull requests with all of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels. |
any-of-labels |
Optional | Only issues or pull requests with at least one of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels. |
any-of-issue-labels |
Optional | Only issues with at least one of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels. Override "any-of-labels" option regarding only the issues. |
any-of-pr-labels |
Optional | Only pull requests with at least one of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels. Override "any-of-labels" option regarding only the pull requests. |
only-issue-labels |
Optional | Only issues with all of these labels are checked if stale. Defaults to `[]` (disabled) and can be a comma-separated list of labels. Override "only-labels" option regarding only the issues. |
only-pr-labels |
Optional | Only pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled) and can be a comma-separated list of labels. Override "only-labels" option regarding only the pull requests. |
operations-per-run |
Optional | The maximum number of operations per run, used to control rate limiting (GitHub API CRUD related). Default: 30 |
remove-stale-when-updated |
Optional | Remove stale labels from issues and pull requests when they are updated or commented on. Default: true |
remove-issue-stale-when-updated |
Optional | Remove stale labels from issues when they are updated or commented on. Override "remove-stale-when-updated" option regarding only the issues. |
remove-pr-stale-when-updated |
Optional | Remove stale labels from pull requests when they are updated or commented on. Override "remove-stale-when-updated" option regarding only the pull requests. |
debug-only |
Optional | Run the processor in debug mode without actually performing any operations on live issues. Default: false |
ascending |
Optional | The order to get issues or pull requests. Defaults to false, which is descending. Default: false |
sort-by |
Optional | What to sort results by. Valid options are `created`, `updated`, and `comments`. Defaults to `created`. Default: created |
delete-branch |
Optional | Delete the git branch after closing a stale pull request. Default: false |
start-date |
Optional | The date used to skip the stale action on issue/pull request created before it (ISO 8601 or RFC 2822). |
exempt-assignees |
Optional | The assignees which exempt an issue or a pull request from being marked as stale. Separate multiple assignees with commas (eg. "user1,user2"). |
exempt-issue-assignees |
Optional | The assignees which exempt an issue from being marked as stale. Separate multiple assignees with commas (eg. "user1,user2"). Override "exempt-assignees" option regarding only the issues. |
exempt-pr-assignees |
Optional | The assignees which exempt a pull request from being marked as stale. Separate multiple assignees with commas (eg. "user1,user2"). Override "exempt-assignees" option regarding only the pull requests. |
exempt-all-assignees |
Optional | Exempt all issues and pull requests with assignees from being marked as stale. Default to false. Default: false |
exempt-all-issue-assignees |
Optional | Exempt all issues with assignees from being marked as stale. Override "exempt-all-assignees" option regarding only the issues. |
exempt-all-pr-assignees |
Optional | Exempt all pull requests with assignees from being marked as stale. Override "exempt-all-assignees" option regarding only the pull requests. |
exempt-draft-pr |
Optional | Exempt draft pull requests from being marked as stale. Default to false. Default: false |
enable-statistics |
Optional | Display some statistics at the end regarding the stale workflow (only when the logs are enabled). Default: true |
labels-to-add-when-unstale |
Optional | A comma delimited list of labels to add when an issue or pull request becomes unstale. |
labels-to-remove-when-stale |
Optional | A comma delimited list of labels to remove when an issue or pull request becomes stale. |
labels-to-remove-when-unstale |
Optional | A comma delimited list of labels to remove when an issue or pull request becomes unstale. |
ignore-updates |
Optional | Any update (update/comment) can reset the stale idle time on the issues and pull requests. Default: false |
ignore-issue-updates |
Optional | Any update (update/comment) can reset the stale idle time on the issues. Override "ignore-updates" option regarding only the issues. |
ignore-pr-updates |
Optional | Any update (update/comment) can reset the stale idle time on the pull requests. Override "ignore-updates" option regarding only the pull requests. |
include-only-assigned |
Optional | Only the issues or the pull requests with an assignee will be marked as stale automatically. Default: false |
only-issue-types |
Optional | Only issues with a matching type are processed as stale/closed. Defaults to `[]` (disabled) and can be a comma-separated list of issue types. |
| Name | Description |
|---|---|
closed-issues-prs |
List of all closed issues and pull requests. |
staled-issues-prs |
List of all staled issues and pull requests. |
name: 'Close Stale Issues'
description: 'Close issues and pull requests with no recent activity'
author: 'GitHub'
inputs:
repo-token:
description: 'Token for the repository. Can be passed in using `{{ secrets.GITHUB_TOKEN }}`.'
required: false
default: ${{ github.token }}
stale-issue-message:
description: 'The message to post on the issue when tagging it. If none provided, will not mark issues stale.'
required: false
stale-pr-message:
description: 'The message to post on the pull request when tagging it. If none provided, will not mark pull requests stale.'
required: false
close-issue-message:
description: 'The message to post on the issue when closing it. If none provided, will not comment when closing an issue.'
required: false
close-pr-message:
description: 'The message to post on the pull request when closing it. If none provided, will not comment when closing a pull requests.'
required: false
days-before-stale:
description: 'The number of days old an issue or a pull request can be before marking it stale. Set to -1 to never mark issues or pull requests as stale automatically.'
required: false
default: '60'
days-before-issue-stale:
description: 'The number of days old an issue can be before marking it stale. Set to -1 to never mark issues as stale automatically. Override "days-before-stale" option regarding only the issues.'
required: false
days-before-pr-stale:
description: 'The number of days old a pull request can be before marking it stale. Set to -1 to never mark pull requests as stale automatically. Override "days-before-stale" option regarding only the pull requests.'
required: false
days-before-close:
description: 'The number of days to wait to close an issue or a pull request after it being marked stale. Set to -1 to never close stale issues or pull requests.'
required: false
default: '7'
days-before-issue-close:
description: 'The number of days to wait to close an issue after it being marked stale. Set to -1 to never close stale issues. Override "days-before-close" option regarding only the issues.'
required: false
days-before-pr-close:
description: 'The number of days to wait to close a pull request after it being marked stale. Set to -1 to never close stale pull requests. Override "days-before-close" option regarding only the pull requests.'
required: false
stale-issue-label:
description: 'The label to apply when an issue is stale.'
required: false
default: 'Stale'
close-issue-label:
description: 'The label to apply when an issue is closed.'
required: false
exempt-issue-labels:
description: 'The labels that mean an issue is exempt from being marked stale. Separate multiple labels with commas (eg. "label1,label2").'
default: ''
required: false
close-issue-reason:
description: 'The reason to use when closing an issue.'
default: 'not_planned'
required: false
stale-pr-label:
description: 'The label to apply when a pull request is stale.'
default: 'Stale'
required: false
close-pr-label:
description: 'The label to apply when a pull request is closed.'
required: false
exempt-pr-labels:
description: 'The labels that mean a pull request is exempt from being marked as stale. Separate multiple labels with commas (eg. "label1,label2").'
default: ''
required: false
exempt-milestones:
description: 'The milestones that mean an issue or a pull request is exempt from being marked as stale. Separate multiple milestones with commas (eg. "milestone1,milestone2").'
default: ''
required: false
exempt-issue-milestones:
description: 'The milestones that mean an issue is exempt from being marked as stale. Separate multiple milestones with commas (eg. "milestone1,milestone2"). Override "exempt-milestones" option regarding only the issues.'
default: ''
required: false
exempt-pr-milestones:
description: 'The milestones that mean a pull request is exempt from being marked as stale. Separate multiple milestones with commas (eg. "milestone1,milestone2"). Override "exempt-milestones" option regarding only the pull requests.'
default: ''
required: false
exempt-all-milestones:
description: 'Exempt all issues and pull requests with milestones from being marked as stale. Default to false.'
default: 'false'
required: false
exempt-all-issue-milestones:
description: 'Exempt all issues with milestones from being marked as stale. Override "exempt-all-milestones" option regarding only the issues.'
default: ''
required: false
exempt-all-pr-milestones:
description: 'Exempt all pull requests with milestones from being marked as stale. Override "exempt-all-milestones" option regarding only the pull requests.'
default: ''
required: false
only-labels:
description: 'Only issues or pull requests with all of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels.'
default: ''
required: false
any-of-labels:
description: 'Only issues or pull requests with at least one of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels.'
default: ''
required: false
any-of-issue-labels:
description: 'Only issues with at least one of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels. Override "any-of-labels" option regarding only the issues.'
default: ''
required: false
any-of-pr-labels:
description: 'Only pull requests with at least one of these labels are checked if stale. Defaults to `` (disabled) and can be a comma-separated list of labels. Override "any-of-labels" option regarding only the pull requests.'
default: ''
required: false
only-issue-labels:
description: 'Only issues with all of these labels are checked if stale. Defaults to `[]` (disabled) and can be a comma-separated list of labels. Override "only-labels" option regarding only the issues.'
default: ''
required: false
only-pr-labels:
description: 'Only pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled) and can be a comma-separated list of labels. Override "only-labels" option regarding only the pull requests.'
default: ''
required: false
operations-per-run:
description: 'The maximum number of operations per run, used to control rate limiting (GitHub API CRUD related).'
default: '30'
required: false
remove-stale-when-updated:
description: 'Remove stale labels from issues and pull requests when they are updated or commented on.'
default: 'true'
required: false
remove-issue-stale-when-updated:
description: 'Remove stale labels from issues when they are updated or commented on. Override "remove-stale-when-updated" option regarding only the issues.'
default: ''
required: false
remove-pr-stale-when-updated:
description: 'Remove stale labels from pull requests when they are updated or commented on. Override "remove-stale-when-updated" option regarding only the pull requests.'
default: ''
required: false
debug-only:
description: 'Run the processor in debug mode without actually performing any operations on live issues.'
default: 'false'
required: false
ascending:
description: 'The order to get issues or pull requests. Defaults to false, which is descending.'
default: 'false'
required: false
sort-by:
description: 'What to sort results by. Valid options are `created`, `updated`, and `comments`. Defaults to `created`.'
default: 'created'
required: false
delete-branch:
description: 'Delete the git branch after closing a stale pull request.'
default: 'false'
required: false
start-date:
description: 'The date used to skip the stale action on issue/pull request created before it (ISO 8601 or RFC 2822).'
default: ''
required: false
exempt-assignees:
description: 'The assignees which exempt an issue or a pull request from being marked as stale. Separate multiple assignees with commas (eg. "user1,user2").'
default: ''
required: false
exempt-issue-assignees:
description: 'The assignees which exempt an issue from being marked as stale. Separate multiple assignees with commas (eg. "user1,user2"). Override "exempt-assignees" option regarding only the issues.'
default: ''
required: false
exempt-pr-assignees:
description: 'The assignees which exempt a pull request from being marked as stale. Separate multiple assignees with commas (eg. "user1,user2"). Override "exempt-assignees" option regarding only the pull requests.'
default: ''
required: false
exempt-all-assignees:
description: 'Exempt all issues and pull requests with assignees from being marked as stale. Default to false.'
default: 'false'
required: false
exempt-all-issue-assignees:
description: 'Exempt all issues with assignees from being marked as stale. Override "exempt-all-assignees" option regarding only the issues.'
default: ''
required: false
exempt-all-pr-assignees:
description: 'Exempt all pull requests with assignees from being marked as stale. Override "exempt-all-assignees" option regarding only the pull requests.'
default: ''
required: false
exempt-draft-pr:
description: 'Exempt draft pull requests from being marked as stale. Default to false.'
default: 'false'
required: false
enable-statistics:
description: 'Display some statistics at the end regarding the stale workflow (only when the logs are enabled).'
default: 'true'
required: false
labels-to-add-when-unstale:
description: 'A comma delimited list of labels to add when an issue or pull request becomes unstale.'
default: ''
required: false
labels-to-remove-when-stale:
description: 'A comma delimited list of labels to remove when an issue or pull request becomes stale.'
default: ''
required: false
labels-to-remove-when-unstale:
description: 'A comma delimited list of labels to remove when an issue or pull request becomes unstale.'
default: ''
required: false
ignore-updates:
description: 'Any update (update/comment) can reset the stale idle time on the issues and pull requests.'
default: 'false'
required: false
ignore-issue-updates:
description: 'Any update (update/comment) can reset the stale idle time on the issues. Override "ignore-updates" option regarding only the issues.'
default: ''
required: false
ignore-pr-updates:
description: 'Any update (update/comment) can reset the stale idle time on the pull requests. Override "ignore-updates" option regarding only the pull requests.'
default: ''
required: false
include-only-assigned:
description: 'Only the issues or the pull requests with an assignee will be marked as stale automatically.'
default: 'false'
required: false
only-issue-types:
description: 'Only issues with a matching type are processed as stale/closed. Defaults to `[]` (disabled) and can be a comma-separated list of issue types.'
default: ''
required: false
outputs:
closed-issues-prs:
description: 'List of all closed issues and pull requests.'
staled-issues-prs:
description: 'List of all staled issues and pull requests.'
runs:
using: 'node24'
main: 'dist/index.js'
Action ID: marketplace/lukka/run-vcpkg
Author: Luca Cappa https://github.com/lukka
Publisher: lukka
Repository: github.com/lukka/run-vcpkg
Setup (and optionally run) vcpkg to build C/C++ dependencies and cache them automatically.
| Name | Required | Description |
|---|---|---|
vcpkgDirectory |
Optional | Specify the vcpkg's root directory. If not specified, it defaults to <github.workspace>/vcpkg/. When vcpkg is a Git submodule, specify the path to it. Default: ${{ github.workspace }}/vcpkg |
runVcpkgInstall |
Optional | Run the installation of packages by running `vcpkg install` on the directory of the discovered 'vcpkg.json' file. Default is false. |
vcpkgGitCommitId |
Optional | Specify the full SHA-1 hash of a Git commit (not a branch name, nor a tag!) that establishes which version of vcpkg needs to be used. When using vcpkg as a Git submodule, this input is *not* needed, as it is implicitly specified by the submodule. When not specified and a submodule is not used for vcpkg, the commit id is searched for in vcpkg.json or vcpkg-configuration.json; see the vcpkgConfigurationJsonGlob input. |
vcpkgGitURL |
Optional | Specify the URL of the Git repository to download vcpkg from. Defaults to https://github.com/microsoft/vcpkg.git Default: https://github.com/microsoft/vcpkg.git |
doNotUpdateVcpkg |
Optional | Avoid updating vcpkg (launching git) in the specified 'vcpkgDirectory'. This is useful when vcpkg is checked out independently of the run-vcpkg action. Default is false. |
doNotCache |
Optional | Enable the caching of the vcpkg executable and its data files (e.g. ports) by setting it to false. Default is true. Set this input to false when the vcpkg's executable is not delivered as a prebuilt file upon bootstrapping vcpkg. This does not disable vcpkg's binary cache which is always on and can be controlled by the user with the env var VCPKG_BINARY_SOURCES. Default: True |
vcpkgJsonGlob |
Optional | Specify the glob expression used to discover the vcpkg.json whose content's hash is added to the cache key. On Windows runners using `github.workspace` context to form the expression would not work as expected since it contains backslashes. Use instead `**/path/to/vcpkg.json` to match the desired `vcpkg.json` file. Default: **/vcpkg.json |
vcpkgJsonIgnores |
Optional | Specify an array of strings containing the patterns to be ignored when searching for the vcpkg.json file. The default value ignores the pattern '**/vcpkg/**' Default: ['**/vcpkg/**'] |
vcpkgConfigurationJsonGlob |
Optional | When the vcpkgGitCommitId input is not provided by the user, this glob expression is used to locate the vcpkg-configuration.json file which contains the commit id of the builtin baseline. On Windows GitHub runners do not use `github.workspace` context expression to form the value of this input, since it contains backslashes and it will eventually fail. Use instead `**/path/to/vcpkg-configuration.json` to match the desired `vcpkg-configuration.json` file. Default: **/vcpkg-configuration.json |
binaryCachePath |
Optional | Specify a path to store the built packages to be cached with the GitHub cache service. Default is '<runnerWorkspace>/b/vcpkg_cache'. |
runVcpkgFormatString |
Optional | Specify the command line to run vcpkg with. This is only useful when the input 'runVcpkgInstall' is set to true. Default: [`install`, `--recurse`, `--clean-after-build`, `--x-install-root`, `$[env.VCPKG_INSTALLED_DIR]`, `--triplet`, `$[env.VCPKG_DEFAULT_TRIPLET]`] |
useShell |
Optional | Specify which shell to be used when launching commands. 'true' means the default shell is used. 'false' means no shell is used. It also can be an absolute path and arguments of the shell to spawn commands with. Default: True |
logCollectionRegExps |
Optional | Specifies a semicolon separated list of regular expressions that are used to identify log file paths in the workflow output. A regular expression must have a single capturing group, that is a single pair of parenthesis such as 'See also (.+.log)'. When a match occurs, the content of the file is written into the workflow output for disclosing its content to the user. The default regular expressions are for CMake's and vcpkg's log files. Default: \s*"(.+CMakeOutput\.log)"\.\s*;\s*"(.+CMakeError\.log)"\.\s*;\s*(.+out\.log)\s*;\s+(.+err\.log)\s*;\s*(.+vcpkg.+\.log)\s* |
# Copyright (c) 2019-2020-2021-2022-2023 Luca Cappa
# Released under the term specified in file LICENSE.txt
# SPDX short identifier: MIT
name: 'run-vcpkg'
description: 'Setup (and optionally run) vcpkg to build C/C++ dependencies and cache them automatically.'
author: 'Luca Cappa https://github.com/lukka'
inputs:
# The following 'inputs' are commonly defined by the user in the workflow.
vcpkgDirectory:
default: ${{ github.workspace }}/vcpkg
required: false
descriptions: "Specify the vcpkg's root directory. If not specified, it defaults to <github.workspace>/vcpkg/ . When vcpkg is a Git submodule, specify the path to it."
runVcpkgInstall:
default: false
required: false
description: "Run the installation of packages by running `vcpkg install` on the directory of the discovered 'vcpkg.json' file. Default is false."
vcpkgGitCommitId:
required: false
description: "Specify the full SHA-1 hash of a Git commit (not a branch name, nor a tag!) that establishes which version of vcpkg needs to be used. When using vcpkg as a Git submodule, this input is *not* needed as implicitly specified by the submodule. When not specified and a submodule is not used for vcpkg, the commit id is being searched in vcpkg.json or vcpkg-configure.json, see vcpkgConfigurationJsonGlob input."
vcpkgGitURL:
default: "https://github.com/microsoft/vcpkg.git"
required: false
description: "Specify the URL Git repository to download vcpkg from. Defaults to https://github.com/microsoft/vcpkg.git"
doNotUpdateVcpkg:
default: false
required: false
description: "Avoid to update vcpkg (launching git) in the specified 'vcpkgDirectory'. This is useful when vcpkg is being checkout independently of the run-vcpkg action. Default is false."
doNotCache:
default: true
required: false
description: "Enable the caching of the vcpkg executable and its data files (e.g. ports) by setting it to false. Default is true. Set this input to false when the vcpkg's executable is not delivered as a prebuilt file upon bootstrapping vcpkg. This does not disable vcpkg's binary cache which is always on and can be controlled by the user with the env var VCPKG_BINARY_SOURCES."
# The following inputs are rarely set by the user, since the default values suffice the most common scenarios.
vcpkgJsonGlob:
default: '**/vcpkg.json'
required: false
description: "Specify the glob expression used to discover the vcpkg.json whose content's hash is added to the cache key. On Windows runners using `github.workspace` context to form the expression would not work as expected since it contains backslashes. Use instead `**/path/to/vcpkg.json` to match the desired `vcpkg.json` file."
vcpkgJsonIgnores:
default: "['**/vcpkg/**']"
required: false
description: "Specify an array of string containing the pattenrs to be ignored when searching for the vcpkg.json file. The default value ignores the pattern '**/vcpkg/**'"
vcpkgConfigurationJsonGlob:
default: '**/vcpkg-configuration.json'
required: false
description: "When the vcpkgGitCommitId input is not provided by the user, this glob expression is used to locate the vcpkg-configuration.json file which contains the commit id of the builtin baseline. On Windows GitHub runners do not use `github.workspace` context expression to form the value of this input, since it contains backslashes and it will eventually fail. Use instead `**/path/to/vcpkg-configuration.json` to match the desired `vcpkg-configuration.json` file."
binaryCachePath:
required: false
description: "Specify a path to store the built packages to be cached with the GitHub cache service. Default is '<runnerWorkspace>/b/vcpkg_cache'."
runVcpkgFormatString:
default: '[`install`, `--recurse`, `--clean-after-build`, `--x-install-root`, `$[env.VCPKG_INSTALLED_DIR]`, `--triplet`, `$[env.VCPKG_DEFAULT_TRIPLET]`]'
required: false
description: "Specify the command line to run vcpkg with. This is only useful when the input 'runVcpkgInstall' is set to true."
useShell:
default: true
required: false
description: "Specify which shell to be used when launching commands. 'true' means the default shell is used. 'false' means no shell is used. It also can be an absolute path and arguments of the shell to spawn commands with."
logCollectionRegExps:
default: "\\s*\"(.+CMakeOutput\\.log)\"\\.\\s*;\\s*\"(.+CMakeError\\.log)\"\\.\\s*;\\s*(.+out\\.log)\\s*;\\s+(.+err\\.log)\\s*;\\s*(.+vcpkg.+\\.log)\\s*"
required: false
description: "Specifies a semicolon separated list of regular expressions that are used to identify log file paths in the workflow output. A regular expression must have a single capturing group, that is a single pair of parenthesis such as 'See also (.+.log)'. When a match occurs, the content of the file is written into the workflow output for disclosing its content to the user. The default regular expressions are for CMake's and vcpkg's log files."
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'terminal'
color: 'green'
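A minimal workflow sketch showing how this action is commonly paired with checkout; the version tag, the pinned commit id, and the assumption that vcpkg is not vendored as a submodule are illustrative, not values taken from the entry above.

```yaml
name: build-with-vcpkg
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up vcpkg and install dependencies from vcpkg.json
        uses: lukka/run-vcpkg@v11   # version tag is illustrative
        with:
          # Placeholder SHA: pin vcpkg to a known commit instead of a branch or tag.
          vcpkgGitCommitId: '0123456789abcdef0123456789abcdef01234567'
          runVcpkgInstall: true
```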
Action ID: marketplace/aws-actions/cloudformation-aws-iam-policy-validator
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/cloudformation-aws-iam-policy-validator
Validate IAM Policies in CFN templates using ValidatePolicy, CheckAccessNotGranted & CheckNoNewAccess API in Access Analyzer
| Name | Required | Description |
|---|---|---|
policy-check-type |
Required | Type of the policy check. Valid values: VALIDATE_POLICY, CHECK_NO_NEW_ACCESS, CHECK_ACCESS_NOT_GRANTED |
template-path |
Required | The path to the CloudFormation template. |
region |
Required | The destination region the resources will be deployed to. |
parameters |
Optional | Keys and values for CloudFormation template parameters. Only parameters that are referenced by IAM policies in the template are required. Example format - KEY=VALUE [KEY=VALUE ...] |
template-configuration-file |
Optional | A JSON formatted file that specifies template parameter values, a stack policy, and tags. Only parameters are used from this file. Everything else is ignored. Identical values passed in the --parameters flag override parameters in this file. See CloudFormation documentation for file format: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-cfn-artifacts.html |
ignore-finding |
Optional | Allow validation failures to be ignored. Specify as a comma separated list of findings to be ignored. Can be individual finding codes (e.g. "PASS_ROLE_WITH_STAR_IN_RESOURCE"), a specific resource name (e.g. "MyResource"), or a combination of both separated by a period.(e.g. "MyResource.PASS_ROLE_WITH_STAR_IN_RESOURCE"). Names of finding codes may change in IAM Access Analyzer over time. Valid options: FINDING_CODE,RESOURCE_NAME,RESOURCE_NAME.FINDING_CODE |
actions |
Optional | List of comma-separated actions. Example format - ACTION,ACTION,ACTION. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED". At least one of "actions" or "resources" must be specified. |
resources |
Optional | List of comma-separated resource ARNs. Example format - RESOURCE,RESOURCE,RESOURCE. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED" At least one of "actions" or "resources" must be specified. |
reference-policy |
Optional | A JSON formatted file that specifies the path to the reference policy that is used for a permissions comparison. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS" |
reference-policy-type |
Optional | The policy type associated with the IAM policy under analysis and the reference policy. Valid values: IDENTITY, RESOURCE. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS" |
treat-finding-type-as-blocking |
Optional | Specify which finding types should be treated as blocking. Other finding types are treated as non blocking. If the tool detects any blocking finding types, it will exit with a non-zero exit code. If all findings are non blocking or there are no findings, the tool exits with an exit code of 0. Defaults to "ERROR" and "SECURITY_WARNING". Specify as a comma separated list of finding types that should be blocking. Pass "NONE" to ignore all findings. This attribute is considered only when policy-check-type is "VALIDATE_POLICY" |
treat-findings-as-non-blocking |
Optional | When not specified, if the tool detects any findings it will exit with a non-zero exit code. When specified, the tool exits with an exit code of 0. This attribute is considered only when policy-check-type is "CHECK_NO_NEW_ACCESS" or "CHECK_ACCESS_NOT_GRANTED" Default: False |
allow-external-principals |
Optional | A comma separated list of external principals that should be ignored. Specify as a comma separated list of a 12 digit AWS account ID, a federated web identity user, a federated SAML user, or an ARN. Specify "*" to allow anonymous access. (e.g. 123456789123,arn:aws:iam::111111111111:role/MyOtherRole,graph.facebook.com). Valid options: ACCOUNT,ARN. This attribute is considered only when policy-check-type is "VALIDATE_POLICY" |
allow-dynamic-ref-without-version |
Optional | Override the default behavior and allow dynamic SSM references without version numbers. The version number ensures that the SSM parameter value that was validated is the one that is deployed. |
exclude-resource-types |
Optional | List of comma-separated resource types. Resource types should be the same as Cloudformation template resource names such as AWS::IAM::Role, AWS::S3::Bucket. Valid option syntax: AWS::SERVICE::RESOURCE |
| Name | Description |
|---|---|
result |
Result of the policy checks |
name: 'Policy checks to validate AWS IAM policies in CloudFormation templates Action For GitHub Actions'
description: "Validate IAM Policies in CFN templates using ValidatePolicy, CheckAccessNotGranted & CheckNoNewAccess API in Access Analyzer"
branding:
icon: "cloud"
color: "orange"
inputs:
policy-check-type:
description: "Type of the policy check. Valid values: VALIDATE_POLICY, CHECK_NO_NEW_ACCESS, CHECK_ACCESS_NOT_GRANTED"
required: true
template-path:
description: "The path to the CloudFormation template."
required: true
region:
description: "The destination region the resources will be deployed to."
required: true
parameters:
description: "Keys and values for CloudFormation template parameters. Only parameters that are referenced by IAM policies in the template are required. Example format - KEY=VALUE [KEY=VALUE ...]"
template-configuration-file:
description: "A JSON formatted file that specifies template parameter values, a stack policy, and tags. Only parameters are used from this file. Everything else is ignored. Identical values passed in the --parameters flag override parameters in this file. See CloudFormation documentation for file format: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-cfn-artifacts.html"
ignore-finding:
description: 'Allow validation failures to be ignored. Specify as a comma separated list of findings to be ignored. Can be individual finding codes (e.g. "PASS_ROLE_WITH_STAR_IN_RESOURCE"), a specific resource name (e.g. "MyResource"), or a combination of both separated by a period.(e.g. "MyResource.PASS_ROLE_WITH_STAR_IN_RESOURCE"). Names of finding codes may change in IAM Access Analyzer over time. Valid options: FINDING_CODE,RESOURCE_NAME,RESOURCE_NAME.FINDING_CODE'
actions:
description: 'List of comma-separated actions. Example format - ACTION,ACTION,ACTION. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED". At least one of "actions" or "resources" must be specified.'
resources:
description: 'List of comma-separated resource ARNs. Example format - RESOURCE,RESOURCE,RESOURCE. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED" At least one of "actions" or "resources" must be specified.'
reference-policy:
description: 'A JSON formatted file that specifies the path to the reference policy that is used for a permissions comparison. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS"'
reference-policy-type:
description: 'The policy type associated with the IAM policy under analysis and the reference policy. Valid values: IDENTITY, RESOURCE. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS"'
treat-finding-type-as-blocking:
description: 'Specify which finding types should be treated as blocking. Other finding types are treated as non blocking. If the tool detects any blocking finding types, it will exit with a non-zero exit code. If all findings are non blocking or there are no findings, the tool exits with an exit code of 0. Defaults to "ERROR" and "SECURITY_WARNING". Specify as a comma separated list of finding types that should be blocking. Pass "NONE" to ignore all findings. This attribute is considered only when policy-check-type is "VALIDATE_POLICY"'
treat-findings-as-non-blocking:
description: 'When not specified, if the tool detects any findings it will exit with a non-zero exit code. When specified, the tool exits with an exit code of 0. This attribute is considered only when policy-check-type is "CHECK_NO_NEW_ACCESS" or "CHECK_ACCESS_NOT_GRANTED"'
default: "False"
allow-external-principals:
description: 'A comma separated list of external principals that should be ignored. Specify as a comma separated list of a 12 digit AWS account ID, a federated web identity user, a federated SAML user, or an ARN. Specify "*" to allow anonymous access. (e.g. 123456789123,arn:aws:iam::111111111111:role/MyOtherRole,graph.facebook.com). Valid options: ACCOUNT,ARN. This attribute is considered only when policy-check-type is "VALIDATE_POLICY"'
allow-dynamic-ref-without-version:
description: "Override the default behavior and allow dynamic SSM references without version numbers. The version number ensures that the SSM parameter value that was validated is the one that is deployed."
exclude-resource-types:
description: "List of comma-separated resource types. Resource types should be the same as Cloudformation template resource names such as AWS::IAM::Role, AWS::S3::Bucket. Valid option syntax: AWS::SERVICE::RESOURCE"
outputs:
result:
description: "Result of the policy checks"
runs:
using: "docker"
image: Dockerfile
args:
- ${{ inputs.policy-check-type}}
- ${{ inputs.template-path }}
- ${{ inputs.region }}
- ${{ inputs.parameters }}
- ${{ inputs.template-configuration-file }}
- ${{ inputs.ignore-finding }}
- ${{ inputs.actions }}
- ${{ inputs.reference-policy }}
- ${{ inputs.reference-policy-type }}
- ${{ inputs.treat-finding-type-as-blocking }}
- ${{ inputs.treat-findings-as-non-blocking }}
- ${{ inputs.allow-external-principals }}
- ${{ inputs.allow-dynamic-ref-without-version }}
- ${{ inputs.exclude-resource-types }}
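A hedged usage sketch: the validator calls IAM Access Analyzer, so the job needs AWS credentials first. The credentials action, role ARN, template path, region, and version tags below are placeholders and are not part of the action definition above.

```yaml
name: validate-iam-policies
on: [pull_request]
jobs:
  policy-check:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # allows OIDC-based AWS authentication
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4   # version tag is illustrative
        with:
          role-to-assume: arn:aws:iam::123456789012:role/policy-validator   # placeholder role
          aws-region: us-east-1
      - uses: aws-actions/cloudformation-aws-iam-policy-validator@v1   # version tag is illustrative
        with:
          policy-check-type: VALIDATE_POLICY
          template-path: ./templates/stack.yaml   # placeholder path
          region: us-east-1
```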
Action ID: marketplace/aws-actions/sustainability-scanner
Author: AWS Sustainability
Publisher: aws-actions
Repository: github.com/aws-actions/sustainability-scanner
Run AWS Sustainability Scanner against infrastructure as code as a pre-packaged GitHub Action
| Name | Required | Description |
|---|---|---|
file |
Optional | File path of the CloudFormation template to be scanned |
directory |
Optional | Directory path with CloudFormation files to be scanned Default: . |
stack_name |
Optional | CDK stack name to be scanned |
rules_file |
Optional | File path to extend set of rules |
| Name | Description |
|---|---|
results |
Results from the sustainability scan |
name: AWS Sustainability Scanner GitHub Action
author: AWS Sustainability
description: Run AWS Sustainability Scanner against infrastructure as code as a pre-packaged GitHub Action
branding:
icon: check-circle
color: green
inputs:
file:
description: File path of the CloudFormation template to be scanned
required: false
directory:
description: Directory path with CloudFormation files to be scanned
required: false
default: '.'
stack_name:
description: CDK stack name to be scanned
required: false
rules_file:
description: File path to extend set of rules
required: false
outputs:
results:
description: Results from the sustainability scan
runs:
using: docker
image: Dockerfile
args:
- ${{ inputs.file }}
- ${{ inputs.directory }}
- ${{ inputs.stack_name }}
- ${{ inputs.rules_file }}
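A minimal sketch that runs the scanner against a directory of CloudFormation templates and echoes the `results` output; the directory path and version tag are illustrative assumptions.

```yaml
name: sustainability-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: susscanner
        uses: aws-actions/sustainability-scanner@v1   # version tag is illustrative
        with:
          directory: ./cloudformation   # placeholder path
      - run: echo '${{ steps.susscanner.outputs.results }}'
```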
Action ID: marketplace/repo-sync/github-sync
Author: Wei He <github@weispot.com>
Publisher: repo-sync
Repository: github.com/repo-sync/github-sync
⤵️ Sync current repository with remote
| Name | Required | Description |
|---|---|---|
source_repo |
Required | GitHub public repo slug or full https clone url (with access_token if needed) |
source_branch |
Required | Branch name to sync from |
destination_branch |
Required | Branch name to sync to in this repo |
github_token |
Required | GitHub token secret |
sync_tags |
Optional | Should tags also be synced |
name: GitHub Repo Sync
author: Wei He <github@weispot.com>
description: ⤵️ Sync current repository with remote
branding:
icon: 'git-branch'
color: 'gray-dark'
inputs:
source_repo:
description: GitHub public repo slug or full https clone url (with access_token if needed)
required: true
source_branch:
description: Branch name to sync from
required: true
destination_branch:
description: Branch name to sync to in this repo
required: true
github_token:
description: GitHub token secret
required: true
sync_tags:
description: Should tags also be synced
required: false
runs:
using: 'docker'
image: docker://ghcr.io/repo-sync/github-sync:v2.3.0
env:
GITHUB_TOKEN: ${{ inputs.github_token }}
SYNC_TAGS: ${{ inputs.sync_tags }}
args:
- ${{ inputs.source_repo }}
- ${{ inputs.source_branch }}:${{ inputs.destination_branch }}
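A hedged example of a scheduled sync using the inputs above; the upstream repository slug, branch names, and cron schedule are placeholders.

```yaml
name: sync-upstream
on:
  schedule:
    - cron: '0 4 * * *'   # placeholder schedule: daily at 04:00 UTC
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: repo-sync/github-sync@v2   # version tag is illustrative
        with:
          source_repo: octocat/hello-world   # placeholder upstream repo
          source_branch: main
          destination_branch: upstream-main
          github_token: ${{ secrets.GITHUB_TOKEN }}
```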
Action ID: marketplace/szenius/set-timezone
Author: szenius
Publisher: szenius
Repository: github.com/szenius/set-timezone
Sets timezone in your locale
| Name | Required | Description |
|---|---|---|
timezoneLinux |
Optional | Desired timezone for Linux Default: UTC |
timezoneMacos |
Optional | Desired timezone for MacOS Default: GMT |
timezoneWindows |
Optional | Desired timezone for Windows Default: UTC |
name: "Set Timezone"
description: "Sets timezone in your locale"
author: "szenius"
branding:
icon: "clock"
color: "yellow"
inputs:
timezoneLinux:
description: "Desired timezone for Linux"
required: false
default: "UTC"
timezoneMacos:
description: "Desired timezone for MacOS"
required: false
default: "GMT"
timezoneWindows:
description: "Desired timezone for Windows"
required: false
default: "UTC"
runs:
using: "node20"
main: "dist/index.mjs"
Action ID: marketplace/andstor/copycat-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/copycat-action
GitHub Action for copying files to other repositories
| Name | Required | Description |
|---|---|---|
personal_token |
Required | Personal access token |
src_path |
Required | The source path to the file(s) or folder(s) to copy from |
dst_path |
Optional | The destination path to copy the file(s) or folder(s) to |
dst_owner |
Required | The name of the owner of the repository to push to |
dst_repo_name |
Required | The name of the repository to push to |
src_branch |
Optional | The branch name of the source repository Default: master |
dst_branch |
Optional | The branch name of the destination repository Default: master |
clean |
Optional | Set to true if the dst_path should be emptied before copying |
file_filter |
Optional | A glob pattern for filtering file names |
filter |
Optional | A glob pattern for filtering file paths to be included for copying |
exclude |
Optional | A glob pattern for excluding paths |
src_wiki |
Optional | Set to true if the source repository you want to copy from is the GitHub Wiki |
dst_wiki |
Optional | Set to true if the destination repository you want to copy from is the GitHub Wiki |
commit_message |
Optional | A custom git commit message. |
username |
Optional | The GitHub username to associate commits made by this GitHub action |
email |
Optional | The email used for associating commits made by this GitHub action |
name: 'Copycat Action'
description: 'GitHub Action for copying files to other repositories'
author: 'André Storhaug'
branding:
icon: 'copy'
color: 'red'
inputs:
personal_token:
description: 'Personal access token'
required: true
src_path:
description: 'The source path to the file(s) or folder(s) to copy from'
required: true
dst_path:
description: 'The destination path to copy the file(s) or folder(s) to'
required: false
dst_owner:
description: 'The name of the owner of the repository to push to'
required: true
dst_repo_name:
description: 'The name of the repository to push to'
required: true
src_branch:
description: 'The branch name of the source repository'
required: false
default: 'master'
dst_branch:
description: 'The branch name of the destination repository'
required: false
default: 'master'
clean:
description: 'Set to true if the dst_path should be emptied before copying'
default: false
required: false
file_filter:
description: 'A glob pattern for filtering file names'
required: false
filter:
description: 'A glob pattern for filtering file paths to be included for copying'
required: false
exclude:
description: 'A glob pattern for excluding paths'
required: false
src_wiki:
description: 'Set to true if the source repository you want to copy from is the GitHub Wiki'
default: false
required: false
dst_wiki:
description: 'Set to true if the destination repository you want to copy from is the GitHub Wiki'
required: false
default: false
commit_message:
description: 'A custom git commit message.'
required: false
username:
description: 'The GitHub username to associate commits made by this GitHub action'
required: false
email:
description: 'The email used for associating commits made by this GitHub action'
required: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.personal_token }}
- ${{ inputs.src_path }}
- ${{ inputs.dst_path }}
- ${{ inputs.dst_owner }}
- ${{ inputs.dst_repo_name }}
- ${{ inputs.src_branch }}
- ${{ inputs.dst_branch }}
- ${{ inputs.clean }}
- ${{ inputs.file_filter }}
- ${{ inputs.filter }}
- ${{ inputs.exclude }}
- ${{ inputs.src_wiki }}
- ${{ inputs.dst_wiki }}
- ${{ inputs.commit_message }}
- ${{ inputs.username }}
- ${{ inputs.email }}
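A hedged sketch that copies a docs folder to a downstream repository on every push to main; the secret name, destination owner/repository, and paths are placeholders.

```yaml
name: copy-docs-downstream
on:
  push:
    branches: [main]
jobs:
  copy:
    runs-on: ubuntu-latest
    steps:
      - uses: andstor/copycat-action@v3   # version tag is illustrative
        with:
          personal_token: ${{ secrets.PERSONAL_TOKEN }}   # PAT with push access to the destination repo
          src_path: docs/.
          dst_path: docs/
          dst_owner: octocat              # placeholder owner
          dst_repo_name: downstream-repo  # placeholder repository
          dst_branch: main
```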
Action ID: marketplace/azure/deployment-what-if-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/deployment-what-if-action
Preview Azure infrastructure changes before deployment
| Name | Required | Description |
|---|---|---|
subscription |
Required | Subscription ID |
resourceGroup |
Required | Resource group name |
templateFile |
Required | ARM template or Bicep file |
parametersFile |
Optional | Parameters file for the ARM template or Bicep |
additionalParameters |
Optional | Additional parameters to be applied on the ARM template or Bicep. Multiple parameters should be separated by spaces. |
name: deployment-what-if
description: Preview Azure infrastructure changes before deployment
inputs:
subscription:
description: Subscription ID
required: true
resourceGroup:
description: Resource group name
required: true
templateFile:
description: ARM template or Bicep file
required: true
parametersFile:
description: Parameters file for the ARM template or Bicep
required: false
additionalParameters:
description: Additional parameters to be applied on the ARM template or Bicep. Multiple parameters should be separated by spaces.
required: false
runs:
using: composite
steps:
- run: |
PARAMETERS=""
if [[ ! -z "${{ inputs.parametersFile }}" || ! -z "${{ inputs.additionalParameters }}" ]]; then
PARAMETERS="--parameters"
if [[ ! -z "${{ inputs.parametersFile }}" ]]; then
PARAMETERS="$PARAMETERS @${{ inputs.parametersFile }}"
fi
if [[ ! -z "${{ inputs.additionalParameters }}" ]]; then
PARAMETERS="$PARAMETERS ${{ inputs.additionalParameters }}"
fi
fi
az deployment group what-if \
--resource-group ${{ inputs.resourceGroup }} \
--subscription ${{ inputs.subscription }} \
--template-file ${{ inputs.templateFile }} \
$PARAMETERS
shell: bash
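Because the composite step shells out to `az deployment group what-if`, the job needs an authenticated Azure CLI first. A hedged sketch follows, where the login step, subscription secret, resource group, and file paths are assumptions.

```yaml
name: arm-what-if
on: [pull_request]
jobs:
  whatif:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2   # version tag is illustrative
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}   # placeholder service principal secret
      - uses: azure/deployment-what-if-action@v1   # version tag is illustrative
        with:
          subscription: ${{ secrets.AZURE_SUBSCRIPTION_ID }}   # placeholder
          resourceGroup: my-resource-group                     # placeholder
          templateFile: infra/main.bicep                       # placeholder
          parametersFile: infra/main.parameters.json           # placeholder
```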
Action ID: marketplace/github/update-project-action
Author: Unknown
Publisher: github
Repository: github.com/github/update-project-action
Updates an item's fields on a GitHub Projects (beta) board based on a workflow dispatch (or other) event's input.
| Name | Required | Description |
|---|---|---|
organization |
Optional | The organization that contains the project, defaults to the current repository owner Default: ${{ github.repository_owner }} |
project_number |
Required | The project number from the project's URL |
operation |
Optional | Operation type (update, read, or clear) Default: update |
content_id |
Required | The global ID of the issue or pull request within the project |
field |
Required | The field on the project to set the value of |
value |
Optional | The value to set the project field to. Not required for operation type read |
github_token |
Required | A GitHub Token with access to both the source issue and the destination project (`repo` and `write:org` scopes) |
| Name | Description |
|---|---|
project_id |
The global ID of the project |
item_id |
The global ID of the issue or pull request |
item_title |
The title of the issue or pull request |
field_id |
The global ID of the field |
field_read_value |
The value of the field before the update |
field_updated_value |
The value of the field after the update |
field_type |
The updated field's ProjectV2FieldType (text, single_select, number, date, or iteration) |
option_id |
The global ID of the selected option |
name: Update project
description: Updates an item's fields on a GitHub Projects (beta) board based on a workflow dispatch (or other) event's input.
inputs:
organization:
description: The organization that contains the project, defaults to the current repository owner
required: false
default: ${{ github.repository_owner }}
project_number:
description: The project number from the project's URL
required: true
operation:
description: Operation type (update, read, or clear)
default: update
required: false
content_id:
description: The global ID of the issue or pull request within the project
required: true
field:
description: The field on the project to set the value of
required: true
value:
description: The value to set the project field to. Not required for operation type read
required: false
github_token:
description: A GitHub Token with access to both the source issue and the destination project (`repo` and `write:org` scopes)
required: true
outputs:
project_id:
description: "The global ID of the project"
value: ${{ steps.parse_project_metadata.outputs.project_id }}
item_id:
description: "The global ID of the issue or pull request"
value: ${{ steps.parse_project_metadata.outputs.item_id }}
item_title:
description: "The title of the issue or pull request"
value: ${{ steps.parse_project_metadata.outputs.item_title }}
field_id:
description: "The global ID of the field"
value: ${{ steps.parse_project_metadata.outputs.field_id }}
field_read_value:
description: "The value of the field before the update"
value: ${{ steps.parse_content_metadata.outputs.item_value }}
field_updated_value:
description: "The value of the field after the update"
value: ${{ steps.output_values.outputs.field_updated_value }}
field_type:
description: "The updated field's ProjectV2FieldType (text, single_select, number, date, or iteration)"
value: ${{ steps.parse_project_metadata.outputs.field_type }}
option_id:
description: "The global ID of the selected option"
value: ${{ steps.parse_project_metadata.outputs.option_id }}
runs:
using: 'node20'
main: 'dist/index.js'
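A hedged sketch that moves a closed issue to a "Done" status column; the organization, project number, field name, value, and token secret are placeholders, and the issue's `node_id` is used as the global content ID.

```yaml
name: move-to-done
on:
  issues:
    types: [closed]
jobs:
  update-project:
    runs-on: ubuntu-latest
    steps:
      - uses: github/update-project-action@v2   # version tag is illustrative
        with:
          github_token: ${{ secrets.PROJECT_TOKEN }}   # token with repo and write:org scopes
          organization: octo-org          # placeholder organization
          project_number: 42              # placeholder project number
          content_id: ${{ github.event.issue.node_id }}
          field: Status                   # placeholder field name
          value: Done                     # placeholder value
```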
Action ID: marketplace/lowlighter/deployctl
Author: Deno Land Inc
Publisher: lowlighter
Repository: github.com/lowlighter/deployctl
Deploy your applications to Deno Deploy, right from GitHub Actions
| Name | Required | Description |
|---|---|---|
deno-config |
Optional | The path to a deno config file that will populate other inputs (optional, deno.json and deno.jsonc are auto-discovered) |
project |
Optional | The name or ID of the project to deploy (required) |
entrypoint |
Optional | The path or URL to the entrypoint file (required) |
import-map |
Optional | The path or URL to an import map file |
include |
Optional | Only upload files that match this pattern (multiline and/or comma-separated) |
exclude |
Optional | Exclude files that match this pattern (multiline and/or comma-separated) |
root |
Optional | The path to the directory containing the code and assets to upload |
import-map-autogen-temp |
Optional | The path to store the auto-generated import map if deno config file is present (relative to root) Default: .importMap.generated.json |
| Name | Description |
|---|---|
deployment-id |
The ID of the created deployment |
url |
The URL where the deployment is reachable |
name: Deploy to Deno Deploy
description: Deploy your applications to Deno Deploy, right from GitHub Actions
author: Deno Land Inc
branding:
color: gray-dark
icon: globe
inputs:
deno-config:
description: The path to a deno config file that will populate other inputs (optional, deno.json and deno.jsonc are auto-discovered)
required: false
project:
description: The name or ID of the project to deploy (required)
required: false
entrypoint:
description: The path or URL to the entrypoint file (required)
required: false
import-map:
description: The path or URL to an import map file
required: false
include:
description: Only upload files that match this pattern (multiline and/or comma-separated)
required: false
exclude:
description: Exclude files that match this pattern (multiline and/or comma-separated)
required: false
root:
description: The path to the directory containing the code and assets to upload
required: false
import-map-autogen-temp:
description: The path to store the auto-generated import map if deno config file is present (relative to root)
required: false
default: ".importMap.generated.json"
outputs:
deployment-id:
description: The ID of the created deployment
url:
description: The URL where the deployment is reachable
runs:
using: node20
main: action/index.js
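A hedged deployment sketch; the project name, entrypoint, ref, and the OIDC permission block reflect common Deno Deploy usage and are assumptions, not values taken from the entry above.

```yaml
name: deploy-to-deno-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC token commonly used to authenticate with Deno Deploy
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: lowlighter/deployctl@main   # ref is illustrative; this catalog entry is a fork of denoland/deployctl
        with:
          project: my-deno-project   # placeholder project name
          entrypoint: main.ts        # placeholder entrypoint
```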
Action ID: marketplace/azure/k8s-artifact-substitute
Author: Unknown
Publisher: azure
Repository: github.com/azure/k8s-artifact-substitute
Swap images/digests in Kubernetes manifest files
| Name | Required | Description |
|---|---|---|
manifests |
Required | Path to the manifest files which will be used for deployment. |
images |
Required | Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files Example: contosodemo.azurecr.io/helloworld:test |
| Name | Description |
|---|---|
manifests |
Paths to the new manifests as a multiline string. Each manifest path is on a new line. |
directory |
Path to the new manifests directory. All new manifests are in the root-level. |
name: "Kubernetes Artifact Substitue"
description: "Swap images/digests in Kubernetes manifest files"
inputs:
manifests:
description: "Path to the manifest files which will be used for deployment."
required: true
images:
description: "Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files Example: contosodemo.azurecr.io/helloworld:test"
required: true
outputs:
manifests:
description: Paths to the new manifests as a multiline string. Each manifest path is on a new line.
directory:
description: Path to the new manifests directory. All new manifests are in the root-level.
branding:
color: "green"
runs:
using: "node20"
main: "lib/index.js"
Action ID: marketplace/amirisback/android-with-flutter-submodule
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-with-flutter-submodule
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/docker/metadata-action
Author: docker
Publisher: docker
Repository: github.com/docker/metadata-action
GitHub Action to extract metadata (tags, labels) for Docker
| Name | Required | Description |
|---|---|---|
context |
Required | Where to get context data. Allowed options are "workflow" (default), "git". Default: workflow |
images |
Optional | List of Docker images to use as base name for tags |
tags |
Optional | List of tags as key-value pair attributes |
flavor |
Optional | Flavors to apply |
labels |
Optional | List of custom labels |
annotations |
Optional | List of custom annotations |
sep-tags |
Optional | Separator to use for tags output (default \n) |
sep-labels |
Optional | Separator to use for labels output (default \n) |
sep-annotations |
Optional | Separator to use for annotations output (default \n) |
bake-target |
Optional | Bake target name (default docker-metadata-action) |
github-token |
Required | GitHub Token as provided by secrets Default: ${{ github.token }} |
| Name | Description |
|---|---|
version |
Generated Docker image version |
tags |
Generated Docker tags |
tag-names |
Generated Docker tag names without image base name |
labels |
Generated Docker labels |
annotations |
Generated annotations |
json |
JSON output of tags and labels |
bake-file-tags |
Bake definition file with tags |
bake-file-labels |
Bake definition file with labels |
bake-file-annotations |
Bake definition file with annotations |
bake-file |
Bake definition file with tags and labels |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Docker Metadata action'
description: "GitHub Action to extract metadata (tags, labels) for Docker"
author: 'docker'
branding:
icon: 'anchor'
color: 'blue'
inputs:
context:
description: 'Where to get context data. Allowed options are "workflow" (default), "git".'
default: "workflow"
required: true
images:
description: 'List of Docker images to use as base name for tags'
required: false
tags:
description: 'List of tags as key-value pair attributes'
required: false
flavor:
description: 'Flavors to apply'
required: false
labels:
description: 'List of custom labels'
required: false
annotations:
description: 'List of custom annotations'
required: false
sep-tags:
description: 'Separator to use for tags output (default \n)'
required: false
sep-labels:
description: 'Separator to use for labels output (default \n)'
required: false
sep-annotations:
description: 'Separator to use for annotations output (default \n)'
required: false
bake-target:
description: 'Bake target name (default docker-metadata-action)'
required: false
github-token:
description: 'GitHub Token as provided by secrets'
default: ${{ github.token }}
required: true
outputs:
version:
description: 'Generated Docker image version'
tags:
description: 'Generated Docker tags'
tag-names:
description: 'Generated Docker tag names without image base name'
labels:
description: 'Generated Docker labels'
annotations:
description: 'Generated annotations'
json:
description: 'JSON output of tags and labels'
bake-file-tags:
description: 'Bake definition file with tags'
bake-file-labels:
description: 'Bake definition file with labels'
bake-file-annotations:
description: 'Bake definition file with annotations'
bake-file:
description: 'Bake definition file with tags and labels'
runs:
using: 'node20'
main: 'dist/index.js'
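A hedged sketch of the usual pairing with a build/push step, where the generated `tags` and `labels` outputs feed the image build; the image name and version tags are illustrative.

```yaml
name: docker-build
on:
  push:
    tags: ['v*']
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: meta
        uses: docker/metadata-action@v5   # version tag is illustrative
        with:
          images: ghcr.io/${{ github.repository }}   # placeholder image name
      - uses: docker/build-push-action@v5   # version tag is illustrative
        with:
          context: .
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```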
Action ID: marketplace/actions/setup-elixir
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-elixir
Set up a specific version of OTP and Elixir and add the command-line tools to the PATH
| Name | Required | Description |
|---|---|---|
elixir-version |
Optional | Version range or exact version of Elixir to use |
otp-version |
Optional | Version range or exact version of OTP to use |
experimental-otp |
Optional | Whether to use experimental builds of OTP (images that have not yet been fully tested include ubuntu-16.04, ubuntu-18.04, ubuntu-20.04) |
install-hex |
Optional | Whether to install Hex Default: True |
install-rebar |
Optional | Whether to install Rebar Default: True |
| Name | Description |
|---|---|
elixir-version |
Exact version of Elixir that was installed |
otp-version |
Exact version of OTP that was installed |
name: Setup Elixir
description: Set up a specific version of OTP and Elixir and add the command-line tools to the PATH
author: GitHub
branding:
color: blue
icon: code
inputs:
elixir-version:
description: Version range or exact version of Elixir to use
otp-version:
description: Version range or exact version of OTP to use
experimental-otp:
description: Whether to use experimental builds of OTP (images that have not yet been fully tested include ubuntu-16.04, ubuntu-18.04, ubuntu-20.04)
default: false
install-hex:
description: Whether to install Hex
default: true
install-rebar:
description: Whether to install Rebar
default: true
outputs:
elixir-version:
description: Exact version of Elixir that was installed
otp-version:
description: Exact version of OTP that was installed
runs:
using: node12
main: dist/index.js
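A minimal CI sketch; the Elixir/OTP versions and the version tag are illustrative assumptions.

```yaml
name: elixir-ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-elixir@v1   # version tag is illustrative
        with:
          elixir-version: '1.14'   # placeholder versions
          otp-version: '25'
      - run: mix deps.get
      - run: mix test
```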
Action ID: marketplace/dflook/terraform-fmt
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-fmt
Rewrite Terraform files into canonical format
| Name | Required | Description |
|---|---|---|
path |
Optional | The path containing Terraform files to format. Default: . |
workspace |
Optional | Terraform workspace to inspect when discovering the Terraform version to use, if the version is not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. Default: default |
backend_config |
Optional | List of Terraform backend config values, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. Paths should be relative to the GitHub Actions workspace |
name: terraform-fmt
description: Rewrite Terraform files into canonical format
author: Daniel Flook
inputs:
path:
description: The path containing Terraform files to format.
required: false
default: "."
workspace:
description: |
Terraform workspace to inspect when discovering the Terraform version to use, if the version is not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: "default"
backend_config:
description: |
List of Terraform backend config values, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/fmt.sh
branding:
icon: globe
color: purple
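A minimal sketch; the path and version tag are illustrative. The rewritten files are typically committed back by a follow-up step, for example a git auto-commit action such as the one catalogued later in this list.

```yaml
name: terraform-fmt-check
on: [push]
jobs:
  fmt:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dflook/terraform-fmt@v1   # version tag is illustrative
        with:
          path: terraform/   # placeholder path
```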
Action ID: marketplace/andstor/file-reader-action
Author: André Storhaug
Publisher: andstor
Repository: github.com/andstor/file-reader-action
Read the contents of a file
| Name | Required | Description |
|---|---|---|
path |
Required | The path to the file to be read. |
encoding |
Optional | The encoding of the file. Default: utf8 |
| Name | Description |
|---|---|
contents |
The file contents. |
name: 'File Reader'
description: 'Read the contents of a file'
author: 'André Storhaug'
branding:
icon: 'file-text'
color: 'blue'
inputs:
path:
description: 'The path to the file to be read.'
required: true
encoding:
description: 'The encoding of the file.'
default: 'utf8'
required: false
outputs:
contents:
description: 'The file contents.'
runs:
using: 'node12'
main: 'dist/index.js'
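A hedged sketch that reads a version file and exposes its contents to a later step; the file name and version tag are placeholders.

```yaml
name: read-version-file
on: [push]
jobs:
  read:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: version
        uses: andstor/file-reader-action@v1   # version tag is illustrative
        with:
          path: VERSION   # placeholder file
      - run: echo "Version is ${{ steps.version.outputs.contents }}"
```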
Action ID: marketplace/dineshsonachalam/git-auto-commit-action
Author: Stefan Zweifel <hello@stefanzweifel.io>
Publisher: dineshsonachalam
Repository: github.com/dineshsonachalam/git-auto-commit-action
Automatically commits files which have been changed during the workflow run and push changes back to remote repository.
| Name | Required | Description |
|---|---|---|
commit_message |
Optional | Commit message Default: Apply automatic changes |
branch |
Optional | Git branch name, where changes should be pushed to. Required if Action is used on the `pull_request` event Default: ${{ github.head_ref }} |
commit_options |
Optional | Commit options (eg. --no-verify) |
add_options |
Optional | Add options (eg. -u) |
status_options |
Optional | Status options (eg. --untracked-files=no) |
file_pattern |
Optional | File pattern used for `git add`. For example `src/\*.js` Default: . |
repository |
Optional | Local file path to the git repository. Defaults to the current directory (`.`) Default: . |
commit_user_name |
Optional | Name used for the commit user Default: GitHub Actions |
commit_user_email |
Optional | Email address used for the commit user Default: actions@github.com |
commit_author |
Optional | Value used for the commit author. Defaults to the username of whoever triggered this workflow run. Default: ${{ github.actor }} <${{ github.actor }}@users.noreply.github.com> |
tagging_message |
Optional | Message used to create a new git tag with the commit. Keep this empty, if no tag should be created. |
push_options |
Optional | Push options (eg. --force) |
skip_dirty_check |
Optional | Skip the check if the git repository is dirty and always try to create a commit. |
skip_fetch |
Optional | Skip the call to git-fetch. |
disable_globbing |
Optional | Stop the shell from expanding filenames (https://www.gnu.org/software/bash/manual/html_node/Filename-Expansion.html) |
| Name | Description |
|---|---|
changes_detected |
Value is "true", if the repository was dirty and file changes have been detected. Value is "false", if no changes have been detected. |
name: Git Auto Commit
description: 'Automatically commits files which have been changed during the workflow run and push changes back to remote repository.'
author: Stefan Zweifel <hello@stefanzweifel.io>
inputs:
commit_message:
description: Commit message
required: false
default: Apply automatic changes
branch:
description: Git branch name, where changes should be pushed to. Required if Action is used on the `pull_request` event
required: false
default: ${{ github.head_ref }}
commit_options:
description: Commit options (eg. --no-verify)
required: false
default: ''
add_options:
description: Add options (eg. -u)
required: false
default: ''
status_options:
description: Status options (eg. --untracked-files=no)
required: false
default: ''
file_pattern:
description: File pattern used for `git add`. For example `src/\*.js`
required: false
default: '.'
repository:
description: Local file path to the git repository. Defaults to the current directory (`.`)
required: false
default: '.'
commit_user_name:
description: Name used for the commit user
required: false
default: GitHub Actions
commit_user_email:
description: Email address used for the commit user
required: false
default: actions@github.com
commit_author:
description: Value used for the commit author. Defaults to the username of whoever triggered this workflow run.
required: false
default: ${{ github.actor }} <${{ github.actor }}@users.noreply.github.com>
tagging_message:
description: Message used to create a new git tag with the commit. Keep this empty, if no tag should be created.
required: false
default: ''
push_options:
description: Push options (eg. --force)
required: false
default: ''
skip_dirty_check:
description: Skip the check if the git repository is dirty and always try to create a commit.
required: false
default: false
skip_fetch:
description: Skip the call to git-fetch.
required: false
default: false
disable_globbing:
description: Stop the shell from expanding filenames (https://www.gnu.org/software/bash/manual/html_node/Filename-Expansion.html)
default: false
outputs:
changes_detected:
description: Value is "true", if the repository was dirty and file changes have been detected. Value is "false", if no changes have been detected.
runs:
using: 'node12'
main: 'index.js'
branding:
icon: 'git-commit'
color: orange
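A hedged sketch where a formatting step dirties the working tree and the action commits the result back; the formatter command, file pattern, and ref are placeholders.

```yaml
name: auto-format-and-commit
on: [push]
jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx prettier --write .   # placeholder step that modifies tracked files
      - uses: dineshsonachalam/git-auto-commit-action@master   # ref is illustrative; upstream is stefanzweifel/git-auto-commit-action
        with:
          commit_message: Apply formatting changes
          file_pattern: '*.js *.ts'   # placeholder pattern
```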
Action ID: marketplace/mheap/raise-pr-on-change-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/raise-pr-on-change-action
Automatically raise a PR to a downstream repo whenever a file changes
| Name | Required | Description |
|---|---|---|
token |
Required | A GitHub API token that has permission to list org membership |
mode |
Optional | Should we rely on the PR's file changes, or check the contents in the upstream repo (default: check-upstream) |
configFile |
Optional | A file containing downstream repos and any file paths that need updating Default: .github/raise-pr-on-change.json |
prBranch |
Optional | The branch to push changes to before submitting a PR |
targetBranch |
Optional | The branch to target when opening a PR |
prTitle |
Required | The PR title to use |
prBody |
Required | The PR body to use |
commitMessage |
Optional | The commit message to use |
name: Raise PR on change
description: Automatically raise a PR to a downstream repo whenever a file changes
runs:
using: docker
image: Dockerfile
branding:
icon: slash
color: orange
inputs:
token:
description: "A GitHub API token that has permission to list org membership"
required: true
mode:
description: "Should we rely on the PR files changes, or check the contents in the upstream repo (default: check-upstream)"
required: false
configFile:
description: "A file containing downstream repos and any file paths that need updating"
required: false
default: ".github/raise-pr-on-change.json"
prBranch:
description: "The branch to push changes to before submitting a PR"
required: false
targetBranch:
description: "The branch to target when opening a PR"
required: false
prTitle:
description: "The PR title to use"
required: true
prBody:
description: "The PR body to use"
required: true
commitMessage:
description: "The commit message use"
required: false
Action ID: marketplace/appleboy/line-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/line-action
Sending a Line message
| Name | Required | Description |
|---|---|---|
secret |
Required | line channel secret |
token |
Required | line channel access token |
room |
Optional | line room ID |
group |
Optional | line group ID |
message |
Optional | the message contents |
images |
Optional | line images |
videos |
Optional | line videos |
audios |
Optional | line audios |
stickers |
Optional | line stickers |
locations |
Optional | line locations |
delimiter |
Optional | line delimiter, default is :: |
name: 'LINE Message Notify'
description: 'Sending a Line message'
author: 'Bo-Yi Wu'
inputs:
secret:
description: 'line channel secret'
required: true
token:
description: 'line channel access token'
required: true
room:
description: 'line room ID'
group:
description: 'line group ID'
message:
description: 'the message contents'
images:
description: 'line images'
videos:
description: 'line videos'
audios:
description: 'line audios'
stickers:
description: 'line stickers'
locations:
description: 'line locations'
delimiter:
description: 'line delimiter, default is ::'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'message-square'
color: 'gray-dark'
Action ID: marketplace/CodelyTV/no-branches
Author: Unknown
Publisher: CodelyTV
Repository: github.com/CodelyTV/no-branches
Automatically remove any branch
| Name | Required | Description |
|---|---|---|
GITHUB_TOKEN |
Required | GitHub token |
name: 'No Branches'
description: 'Automatically remove any branch'
branding:
icon: 'alert-triangle'
color: 'red'
inputs:
GITHUB_TOKEN:
description: 'GitHub token'
required: true
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.GITHUB_TOKEN }}
Action ID: marketplace/mheap/automatic-approve-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/automatic-approve-action
Automatically approve workflow runs from first time contributors
| Name | Required | Description |
|---|---|---|
token |
Required | The GitHub Token to use |
workflows |
Required | The workflows to automatically approve |
dangerous_files |
Optional | Filenames that prevent the PR being automatically approved |
safe_files |
Optional | Filenames that allow the PR being automatically approved |
name: Automatic Approve
description: Automatically approve workflow runs from first time contributors
runs:
using: node20
main: index.js
branding:
icon: user-check
color: green
inputs:
token:
description: The GitHub Token to use
required: true
workflows:
description: The workflows to automatically approve
required: true
dangerous_files:
description: Filenames that prevent the PR being automatically approved
required: false
safe_files:
description: Filenames that allow the PR being automatically approved
required: false
Action ID: marketplace/github/generate-dependabot-glob-action
Author: Unknown
Publisher: github
Repository: github.com/github/generate-dependabot-glob-action
Creates a `dependabot.yml` dynamically based on glob patterns.
| Name | Required | Description |
|---|---|---|
template-file |
Optional | Location of the file to use as template Default: .github/dependabot.template.yml |
follow-symbolic-links |
Optional | Indicates whether to follow symbolic links (If you want to put your template in a weird place) Default: true |
file-header |
Optional | Header to add to the generated file. ${input-name} will be replaced with the value of the given input. Default: # This file was generated by the "Generate Dependabot Glob" action. Do not edit it directly. # Make changes to `${template-file}` and a PR will be automatically created. |
name: 'Generate Dependabot Glob'
branding:
icon: briefcase
color: orange
description: 'Creates a `dependabot.yml` dynamically based on glob patterns.'
inputs:
template-file:
description: 'Location of the file to use as template'
default: .github/dependabot.template.yml
follow-symbolic-links:
description: 'Indicates whether to follow symbolic links (If you want to put your template in a weird place)'
default: 'true'
file-header:
description: 'Header to add to the generated file. ${input-name} will be replaced with the value of the given input.'
default: |
# This file was generated by the "Generate Dependabot Glob" action. Do not edit it directly.
# Make changes to `${template-file}` and a PR will be automatically created.
runs:
using: 'node20'
main: 'dist/index.js'
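A minimal sketch of a job that regenerates `dependabot.yml` from the template; the trigger, checkout step, and version refs are assumptions:
on:
  push:
    branches: [main]
jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # assumed ref
      - uses: github/generate-dependabot-glob-action@main # placeholder ref
        with:
          template-file: .github/dependabot.template.yml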
Action ID: marketplace/julia-actions/julia-invalidations
Author: Unknown
Publisher: julia-actions
Repository: github.com/julia-actions/julia-invalidations
Evaluate number of invalidations
| Name | Required | Description |
|---|---|---|
test_script |
Optional | Script to test for invalidations. Defaults to `import Package` |
package_name |
Optional | Name of the package. By default, tries to auto-detect from the repo name. |
max_invalidations |
Optional | Maximum number of invalidations to report. Defaults to `0` (no limit) Default: 0 |
| Name | Description |
|---|---|
total |
Total number of invalidations |
deps |
Number of invalidations caused by deps |
name: Invalidations
description: 'Evaluate number of invalidations'
inputs:
test_script:
description: 'Script to test for invalidations. Defaults to `import Package`'
required: false
default: ''
package_name:
description: 'Name of the package. By default, tries to auto-detect from the repo name.'
required: false
default: ''
type: string
max_invalidations:
description: 'Maximum number of invalidations to report. Defaults to `0` (no limit)'
required: false
default: '0'
type: number
outputs:
total:
description: "Total number of invalidations"
value: ${{ steps.invs.outputs.total }}
deps:
description: "Number of invalidations caused by deps"
value: ${{ steps.invs.outputs.deps }}
runs:
using: "composite"
steps:
- name: Get package name
id: info
run: |
REPONAME="${{ github.event.repository.name }}"
if [[ '${{ inputs.package_name }}' == '' ]]; then
PACKAGENAME=${REPONAME%.jl}
else
PACKAGENAME="${{ inputs.package_name }}"
fi
echo "packagename=$PACKAGENAME" >> $GITHUB_OUTPUT
if [[ '${{ inputs.test_script }}' == '' ]]; then
TESTSCRIPT="import ${PACKAGENAME}"
else
TESTSCRIPT="${{ inputs.test_script }}"
fi
echo "testscript=$TESTSCRIPT" >> $GITHUB_OUTPUT
shell: bash
- name: Install SnoopCompile tools
run: |
import Pkg;
pkgs = [
Pkg.PackageSpec(name = "SnoopCompile", version = "3", uuid = "aa65fe97-06da-5843-b5b1-d5d13cad87d2"),
Pkg.PackageSpec(name = "SnoopCompileCore", uuid = "e2b509da-e806-4183-be48-004708413034"),
Pkg.PackageSpec(name = "PrettyTables", uuid = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"),
]
Pkg.add(pkgs)
shell: julia --project {0}
- name: Load package on branch
id: invs
run: |
using SnoopCompileCore: @snoop_invalidations
invalidations = @snoop_invalidations begin ${{ steps.info.outputs.testscript }} end
using SnoopCompile: SnoopCompile, filtermod, invalidation_trees, uinvalidated
inv_owned = length(filtermod(${{ steps.info.outputs.packagename }}, invalidation_trees(invalidations)))
inv_total = length(uinvalidated(invalidations))
inv_deps = inv_total - inv_owned
@show inv_total, inv_deps
# Report invalidations summary:
import PrettyTables # needed for `report_invalidations` to be defined
SnoopCompile.report_invalidations(;
invalidations,
process_filename = x -> last(split(x, ".julia/packages/")),
n_rows = ${{ inputs.max_invalidations }},
)
# Set outputs
using Printf
open(ENV["GITHUB_OUTPUT"], "a") do io
println(io, @sprintf("total=%09d", inv_total))
println(io, @sprintf("deps=%09d", inv_deps))
end
shell: julia --color=yes --project=. {0}
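A sketch of wiring this composite action into a job and reading its outputs; the setup/build steps and version refs are assumptions:
jobs:
  invalidations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # assumed ref
      - uses: julia-actions/setup-julia@v2 # assumed ref
        with:
          version: '1'
      - uses: julia-actions/julia-buildpkg@v1 # assumed ref
      - id: invs
        uses: julia-actions/julia-invalidations@v1 # assumed ref
      - run: echo "total=${{ steps.invs.outputs.total }} deps=${{ steps.invs.outputs.deps }}"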
Action ID: marketplace/lukka/run-cmake
Author: Luca Cappa https://github.com/lukka
Publisher: lukka
Repository: github.com/lukka/run-cmake
Run CMake with CMakePresets.json to configure, build, package and test C/C++ source code.
| Name | Required | Description |
|---|---|---|
cmakeListsTxtPath |
Optional | Path to CMakeLists.txt. Default: ${{ github.workspace }}/CMakeLists.txt |
workflowPreset |
Optional | The name of the workflow preset. Optional, it cannot be used with any other preset input. This value is stored in the WORKFLOW_PRESET_NAME environment variable, and used by the default value of 'workflowPresetCmdString' input. |
configurePreset |
Optional | The name of the configure preset. Optional, but at least one of the preset inputs must be provided. This value is stored in the CONFIGURE_PRESET_NAME environment variable, and used by the default value of 'configurePresetCmdString' input. |
buildPreset |
Optional | The name of the build preset. Optional, but at least one of the preset inputs must be provided. This value is stored in the BUILD_PRESET_NAME environment variable, and used by the default value of 'buildPresetCmdString' input. |
testPreset |
Optional | The name of the test preset (ctest). Optional, but at least one of the preset inputs must be provided. This value is stored in the TEST_PRESET_NAME environment variable, and used by the default value of 'testPresetCmdString' input. |
packagePreset |
Optional | The name of the package preset (cpack). Optional, but at least one of the preset inputs must be provided. This value is stored in the PACKAGE_PRESET_NAME environment variable, and used by the default value of 'packagePresetCmdString' input. |
configurePresetAdditionalArgs |
Optional | A string representing a list of additional arguments for configuring. Optional. Useful when specifying additional variables such as ['-DVARIABLE=NAME', '-DANOTHERVARIABLE=ANOTHERNAME'] Default: [] |
buildPresetAdditionalArgs |
Optional | A string representing a list of additional arguments for building. Optional. Useful when specifying the config to build with a multi-configuration generator, e.g., ['--config DEBUG'] Default: [] |
testPresetAdditionalArgs |
Optional | A string representing a list of additional arguments for testing. Optional. Useful when specifying the config to test with a multi-configuration generator, e.g., ['--config DEBUG'] Default: [] |
packagePresetAdditionalArgs |
Optional | A string representing a list of additional arguments for cpack. Optional. Default: [] |
useShell |
Optional | Specify which shell to use when launching commands. 'true' means the default shell is used. 'false' means no shell is used. It can also be an absolute path to a shell, with arguments, to spawn commands with. Default: True |
logCollectionRegExps |
Optional | Specifies a semicolon-separated list of regular expressions that are used to identify log file paths in the workflow output. A regular expression must have a single capturing group, that is, a single pair of parentheses such as 'See also (.+.log)'. When a match occurs, the content of the file is written into the workflow output for disclosing its content to the user. The default regular expressions are for CMake's and vcpkg's log files. Default: \s*"(.+CMakeOutput\.log)"\.\s*;\s*"(.+CMakeError\.log)"\.\s*;\s*(.+out\.log)\s*;\s+(.+err\.log)\s*;\s*(.+vcpkg.+\.log)\s* |
workflowPresetCmdString |
Optional | The CMake command format string to run the workflow steps. Default: [`--workflow`, `--preset`, `$[env.WORKFLOW_PRESET_NAME]`, `--fresh`] |
configurePresetCmdString |
Optional | The CMake command format string to configure and generate project files. Default: [`--preset`, `$[env.CONFIGURE_PRESET_NAME]`] |
buildPresetCmdString |
Optional | The CMake command format string to run the build. Default: [`--build`, `--preset`, `$[env.BUILD_PRESET_NAME]`] |
testPresetCmdString |
Optional | The CTest command format string to run test. Default: [`--preset`, `$[env.TEST_PRESET_NAME]`] |
packagePresetCmdString |
Optional | The CPack command format string to package the project. Default: [`--preset`, `$[env.PACKAGE_PRESET_NAME]`] |
runVcpkgEnvFormatString |
Optional | Specify the command line to dump the environment variables with the 'vcpkg env' command. This command is only used when setting up the environment for MSVC on Windows. Default: [`env`, `--bin`, `--include`, `--tools`, `--python`, `--triplet`, `$[env.VCPKG_DEFAULT_TRIPLET]`, `set`] |
# Copyright (c) 2019-2020-2021-2023 Luca Cappa
# Released under the term specified in file LICENSE.txt
# SPDX short identifier: MIT
name: 'run-cmake'
description: 'Run CMake with CMakePresets.json to configure, build, package and test C/C++ source code.'
author: 'Luca Cappa https://github.com/lukka'
inputs:
# The following 'inputs' are commonly defined by the user in the workflow.
cmakeListsTxtPath:
default: "${{ github.workspace }}/CMakeLists.txt"
required: false
description: "Path to CMakeLists.txt."
workflowPreset:
default: ""
required: false
description: "The name of the workflow preset. Optional, it cannot be used with any other preset input. This value is stored in the WORKFLOW_PRESET_NAME environment variable, and used by the default value of 'workflowPresetCmdString' input."
configurePreset:
default: ""
required: false
description: "The name of the configure preset. Optional, but at least one of the preset input must be provided. This value is stored in the CONFIGURE_PRESET_NAME environment variable, and used by the default value of 'configurePresetCmdString' input."
buildPreset:
default: ""
required: false
description: "The name of the build preset. Optional, but at least one of the preset input must be provided. This value is stored in the BUILD_PRESET_NAME environment variable, and used by the default value of 'buildPresetCmdString' input.'"
testPreset:
default: ""
required: false
description: "The name of the test preset (ctest). Optional, but at least one of the preset input must be provided. This value is stored in the TEST_PRESET_NAME environment variable, and used by the default value of 'testPresetCmdString' input.'"
packagePreset:
default: ""
required: false
description: "The name of the package preset (cpack). Optional, but at least one of the preset input must be provided. This value is stored in the PACKAGE_PRESET_NAME environment variable, and used by the default value of 'packagePresetCmdString' input.'"
configurePresetAdditionalArgs:
default: "[]"
required: false
description: "A string representing list of additional arguments for configuring. Optional. Useful when specifing additional variables such as, e.g., ['-DVARIABLE=NAME', '-DANOTHERVARIABLE=ANOTHERNAME']"
buildPresetAdditionalArgs:
default: "[]"
required: false
description: "A string representing list of additional arguments for building. Optional. Useful when specifing the config to build with a multi configuration generator, e.g., ['--config DEBUG']"
testPresetAdditionalArgs:
default: "[]"
required: false
description: "A string representing list of additional arguments for testing. Optional. Useful when specifing the config to test with a multi configuration generator, e.g., ['--config DEBUG']"
packagePresetAdditionalArgs:
default: "[]"
required: false
description: "A string representing list of additional arguments for cpack. Optional."
# The following inputs are rarely set by the user, since the default values suffice the most common scenarios.
useShell:
default: true
required: false
description: "Specify which shell to be used when launching commands. 'true' means the default shell is used. 'false' means no shell is used. It also can be an absolute with arguments of the shell to spawn commands with."
logCollectionRegExps:
default: "\\s*\"(.+CMakeOutput\\.log)\"\\.\\s*;\\s*\"(.+CMakeError\\.log)\"\\.\\s*;\\s*(.+out\\.log)\\s*;\\s+(.+err\\.log)\\s*;\\s*(.+vcpkg.+\\.log)\\s*"
required: false
description: "Specifies a semicolon separated list of regular expressions that are used to identify log file paths in the workflow output. A regular expression must have a single capturing group, that is a single pair of parenthesis such as 'See also (.+.log)'. When a match occurs, the content of the file is written into the workflow output for disclosing its content to the user. The default regular expressions are for CMake's and vcpkg's log files."
workflowPresetCmdString:
default: "[`--workflow`, `--preset`, `$[env.WORKFLOW_PRESET_NAME]`, `--fresh`]"
required: false
description: "The CMake command format string to run the workflow steps."
configurePresetCmdString:
default: "[`--preset`, `$[env.CONFIGURE_PRESET_NAME]`]"
required: false
description: "The CMake command format string to configure and generate project files."
buildPresetCmdString:
default: "[`--build`, `--preset`, `$[env.BUILD_PRESET_NAME]`]"
required: false
description: "The CMake command format string to run the build."
testPresetCmdString:
default: "[`--preset`, `$[env.TEST_PRESET_NAME]`]"
required: false
description: "The CTest command format string to run test."
packagePresetCmdString:
default: "[`--preset`, `$[env.PACKAGE_PRESET_NAME]`]"
required: false
description: "The CPack command format string to package the project."
runVcpkgEnvFormatString:
default: "[`env`, `--bin`, `--include`, `--tools`, `--python`, `--triplet`, `$[env.VCPKG_DEFAULT_TRIPLET]`, `set`]"
required: false
description: "Specify the command line to dump the environment variables with the 'vcpkg env' command. This command is only used when setting up the environment for MSVC on Windows."
runs:
using: 'node20'
main: './dist/index.js'
branding:
icon: 'terminal'
color: 'green'
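A hedged configure/build/test sketch driven by presets; the preset names must exist in the project's CMakePresets.json and, like the version refs, are placeholders:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # assumed ref
      - uses: lukka/run-cmake@v10 # placeholder ref
        with:
          configurePreset: 'ninja-release' # placeholder preset names
          buildPreset: 'ninja-release'
          testPreset: 'ninja-release'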
Action ID: marketplace/julia-actions/cache
Author: Sascha Mann, Rik Huijzer, and contributors
Publisher: julia-actions
Repository: github.com/julia-actions/cache
Cache Julia using actions/cache
| Name | Required | Description |
|---|---|---|
cache-name |
Optional | The cache key prefix. The key body automatically includes the OS and, unless disabled, the matrix vars. Include any other parameters/details in this prefix to ensure one unique cache key per concurrent job type. Default: julia-cache;workflow=${{ github.workflow }};job=${{ github.job }} |
include-matrix |
Optional | Whether to include the matrix values when constructing the cache key. Default: true |
depot |
Optional | Path to a Julia depot directory where cached data will be saved to and restored from. |
cache-artifacts |
Optional | Whether to cache the depot's `artifacts` directory. Default: true |
cache-packages |
Optional | Whether to cache the depot's `packages` directory. Default: true |
cache-registries |
Optional | Whether to cache the depot's `registries` directory. Default: true |
cache-compiled |
Optional | Whether to cache the depot's `compiled` directory. Default: true |
cache-scratchspaces |
Optional | Whether to cache the depot's `scratchspaces` directory. Default: true |
cache-logs |
Optional | Whether to cache the depot's `logs` directory. This helps automatic `Pkg.gc()` keep the cache size down. Default: true |
delete-old-caches |
Optional | Whether to delete old caches for the given key. Default: true |
token |
Optional | A GitHub PAT. Requires `repo` scope to enable the deletion of old caches. Default: ${{ github.token }} |
| Name | Description |
|---|---|
cache-hit |
A boolean value to indicate an exact match was found for the primary key. Returns "" when the key is new. Forwarded from actions/cache. |
cache-paths |
The paths that were cached |
cache-key |
The full cache key used |
name: 'Cache Julia artifacts, packages and registry'
description: 'Cache Julia using actions/cache'
author: 'Sascha Mann, Rik Huijzer, and contributors'
branding:
icon: 'archive'
color: 'purple'
inputs:
cache-name:
description: >-
The cache key prefix. The key body automatically includes the OS and, unless disabled, the matrix vars.
Include any other parameters/details in this prefix to ensure one unique cache key per concurrent job type.
default: julia-cache;workflow=${{ github.workflow }};job=${{ github.job }}
include-matrix:
description: Whether to include the matrix values when constructing the cache key.
default: 'true'
depot:
description: Path to a Julia depot directory where cached data will be saved to and restored from.
default: ''
cache-artifacts:
description: Whether to cache the depot's `artifacts` directory.
default: 'true'
cache-packages:
description: Whether to cache the depot's `packages` directory.
default: 'true'
cache-registries:
description: Whether to cache the depot's `registries` directory.
default: 'true'
cache-compiled:
description: Whether to cache the depot's `compiled` directory.
default: 'true'
cache-scratchspaces:
description: Whether to cache the depot's `scratchspaces` directory.
default: 'true'
cache-logs:
description: Whether to cache the depot's `logs` directory. This helps automatic `Pkg.gc()` keep the cache size down.
default: 'true'
delete-old-caches:
description: Whether to delete old caches for the given key.
default: 'true'
token:
description: A GitHub PAT. Requires `repo` scope to enable the deletion of old caches.
default: ${{ github.token }}
outputs:
cache-hit:
description: A boolean value to indicate an exact match was found for the primary key. Returns "" when the key is new. Forwarded from actions/cache.
value: ${{ steps.hit.outputs.cache-hit }}
cache-paths:
description: The paths that were cached
value: ${{ steps.paths.outputs.cache-paths }}
cache-key:
description: The full cache key used
value: ${{ steps.keys.outputs.key }}
runs:
using: 'composite'
steps:
- name: Install jq
uses: dcarbone/install-jq-action@b7ef57d46ece78760b4019dbc4080a1ba2a40b45 # v3.2.0
with:
force: false # Skip install when an existing `jq` is present
- id: paths
run: |
if [ -n "${{ inputs.depot }}" ]; then
depot="${{ inputs.depot }}"
elif [ -n "$JULIA_DEPOT_PATH" ]; then
# Use the first depot path
depot=$(echo $JULIA_DEPOT_PATH | cut -d$PATH_DELIMITER -f1)
else
depot="~/.julia"
fi
if [[ "$OSTYPE" == "msys" || "$OSTYPE" == "cygwin" ]]; then
depot="${depot/#\~/$USERPROFILE}" # Windows paths
depot="${depot//\\//}" # Replace backslashes with forward slashes
else
depot="${depot/#\~/$HOME}" # Unix-like paths
fi
echo "depot=$depot" | tee -a "$GITHUB_OUTPUT"
cache_paths=()
artifacts_path="${depot}/artifacts"
[ "${{ inputs.cache-artifacts }}" = "true" ] && cache_paths+=("$artifacts_path")
packages_path="${depot}/packages"
[ "${{ inputs.cache-packages }}" = "true" ] && cache_paths+=("$packages_path")
registries_path="${depot}/registries"
if [ "${{ inputs.cache-registries }}" = "true" ]; then
if [ ! -d "${registries_path}" ]; then
cache_paths+=("$registries_path")
else
echo "::warning::Julia depot registries already exist. Skipping restoring of cached registries to avoid potential merge conflicts when updating. Please ensure that \`julia-actions/cache\` precedes any workflow steps which add registries."
fi
fi
compiled_path="${depot}/compiled"
[ "${{ inputs.cache-compiled }}" = "true" ] && cache_paths+=("$compiled_path")
scratchspaces_path="${depot}/scratchspaces"
[ "${{ inputs.cache-scratchspaces }}" = "true" ] && cache_paths+=("$scratchspaces_path")
logs_path="${depot}/logs"
[ "${{ inputs.cache-logs }}" = "true" ] && cache_paths+=("$logs_path")
{
echo "cache-paths<<EOF"
printf "%s\n" "${cache_paths[@]}"
echo "EOF"
} | tee -a "$GITHUB_OUTPUT"
shell: bash
env:
PATH_DELIMITER: ${{ runner.OS == 'Windows' && ';' || ':' }}
- name: Generate Keys
id: keys
run: |
# `matrix_key` joins all of matrix keys/values (including nested objects) to ensure that concurrent runs each use a unique cache key.
# When `matrix` isn't set for the job then `MATRIX_JSON=null`.
if [ "${{ inputs.include-matrix }}" == "true" ] && [ "$MATRIX_JSON" != "null" ]; then
matrix_key=$(echo "$MATRIX_JSON" | jq 'paths(type != "object") as $p | ($p | join("-")) + "=" + (getpath($p) | tostring)' | jq -rs 'join(";") | . + ";"')
fi
restore_key="${{ inputs.cache-name }};os=${{ runner.os }};${matrix_key}"
# URL encode any restricted characters:
# https://github.com/actions/toolkit/blob/5430c5d84832076372990c7c27f900878ff66dc9/packages/cache/src/cache.ts#L38-L43
restore_key=$(sed 's/,/%2C/g' <<<"${restore_key}")
key="${restore_key}run_id=${{ github.run_id }};run_attempt=${{ github.run_attempt }}"
echo "restore-key=${restore_key}" >> $GITHUB_OUTPUT
echo "key=${key}" >> $GITHUB_OUTPUT
shell: bash
env:
MATRIX_JSON: ${{ toJSON(matrix) }}
- uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
id: cache
with:
path: |
${{ steps.paths.outputs.cache-paths }}
key: ${{ steps.keys.outputs.key }}
restore-keys: ${{ steps.keys.outputs.restore-key }}
enableCrossOsArchive: false
# if it wasn't restored make the depot anyway as a signal that this action ran
# for other julia actions to check, like https://github.com/julia-actions/julia-buildpkg/pull/41
- name: make depot if not restored, then list depot directory sizes
run: |
mkdir -p ${{ steps.paths.outputs.depot }}
du -shc ${{ steps.paths.outputs.depot }}/* || true
shell: bash
# issue https://github.com/julia-actions/cache/issues/110
# Pkg may not run `Registry.update()` if a manifest exists, which may exist because of a
# `Pkg.dev` call or because one is added to the repo. So be safe and update cached registries here.
# Older (~v1.0) versions of julia that don't have `Pkg.Registry.update()` seem to always update registries in
# Pkg operations. So this is only necessary for newer julia versions.
- name: Update any cached registries
if: ${{ inputs.cache-registries == 'true' }}
continue-on-error: true
run: |
if [ -d "${{ steps.paths.outputs.depot }}/registries" ] && [ -n "$(ls -A "${{ steps.paths.outputs.depot }}/registries")" ]; then
echo "Registries directory exists and is non-empty. Updating any registries"
julia -e "import Pkg; isdefined(Pkg, :Registry) && Pkg.Registry.update();"
else
echo "Registries directory does not exist or is empty. Skipping registry update"
fi
shell: bash
# GitHub actions cache entries are immutable and cannot be updated. In order to have both the Julia
# depot cache be up-to-date and avoid storing redundant cache entries we'll manually cleanup old
# cache entries before the new cache is saved. However, we need to be careful with our manual
# cleanup as otherwise we can cause cache misses for jobs which would have normally had a cache hit.
# Some scenarios to keep in mind include:
#
# - Job failures result in the post-action for `actions/cache` being skipped. If we delete all cache
# entries for the branch we may have no cache entry available for the next run.
# - We should avoid deleting old cache entries for the default branch since these entries serve as
# the fallback if no earlier cache entry exists on a branch. We can rely on GitHub's default cache
# eviction policy here which will remove the oldest cache entry first.
#
# References:
# - https://github.com/actions/cache/blob/main/tips-and-workarounds.md#update-a-cache
# - https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy
# Not windows
- uses: pyTooling/Actions/with-post-step@c2282e4d63fb9f7b3e8f4c672b01d9dab7af22c3 # v6.7.0
if: ${{ inputs.delete-old-caches != 'false' &&
github.ref != format('refs/heads/{0}', github.event.repository.default_branch) &&
runner.OS != 'Windows' }}
with:
# seems like there has to be a `main` step in this action. Could list caches for info if we wanted
# main: julia ${{ github.action_path }}/handle_caches.jl "${{ github.repository }}" "list"
main: echo ""
post: julia $GITHUB_ACTION_PATH/handle_caches.jl rm "${{ github.repository }}" "${{ steps.keys.outputs.restore-key }}" "${{ github.ref }}" "${{ inputs.delete-old-caches != 'required' }}"
env:
GH_TOKEN: ${{ inputs.token }}
# Windows (because this action uses command prompt on windows)
- uses: pyTooling/Actions/with-post-step@c2282e4d63fb9f7b3e8f4c672b01d9dab7af22c3 # v6.7.0
if: ${{ inputs.delete-old-caches != 'false' &&
github.ref != format('refs/heads/{0}', github.event.repository.default_branch) &&
runner.OS == 'Windows' }}
with:
main: echo ""
post: cd %GITHUB_ACTION_PATH% && julia handle_caches.jl rm "${{ github.repository }}" "${{ steps.keys.outputs.restore-key }}" "${{ github.ref }}" "${{ inputs.delete-old-caches != 'required' }}"
env:
GH_TOKEN: ${{ inputs.token }}
- id: hit
run: echo "cache-hit=$CACHE_HIT" >> $GITHUB_OUTPUT
env:
CACHE_HIT: ${{ steps.cache.outputs.cache-hit }}
shell: bash
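A typical placement sketch: the cache step comes right after setup-julia and before any step that adds registries or installs packages (as the warning in the composite step notes). Version refs are assumptions:
steps:
  - uses: actions/checkout@v4 # assumed ref
  - uses: julia-actions/setup-julia@v2 # assumed ref
  - uses: julia-actions/cache@v2 # assumed ref
  - uses: julia-actions/julia-buildpkg@v1 # assumed ref
  - uses: julia-actions/julia-runtest@v1 # assumed ref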
Action ID: marketplace/github/dependency-submission-toolkit
Author: Unknown
Publisher: github
Repository: github.com/github/dependency-submission-toolkit
Example action using the dependency-submission-toolkit and npm
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub Personal Access Token (PAT). Defaults to PAT provided by Action runner Default: ${{ github.token }} |
npm-package-directory |
Required | NPM package directory (directory with package.json) Default: ./ |
name: 'Example Dependency Submission Action'
description: 'Example action using the dependency-submission-toolkit and npm'
inputs:
token:
description: "GitHub Personal Access Token (PAT). Defaults to PAT provided by Action runner"
required: false
default: ${{ github.token }}
npm-package-directory:
description: 'NPM package directory (directory with package.json)'
required: true
default: './'
runs:
using: 'node20'
main: 'example/dist/index.js'
Action ID: marketplace/stefanzweifel/oh-dear-request-run-action
Author: Stefan Zweifel <stefan@stefanzweifel.dev>
Publisher: stefanzweifel
Repository: github.com/stefanzweifel/oh-dear-request-run-action
Trigger a new run for an Oh Dear site.
| Name | Required | Description |
|---|---|---|
oh_dear_token |
Required | Valid API token to communicate with the Oh Dear API. |
check_id |
Required | The ID of the check that should run. |
name: Request Run for Oh Dear
description: Trigger a new run for an Oh Dear site.
author: Stefan Zweifel <stefan@stefanzweifel.dev>
inputs:
oh_dear_token:
description: Valid API token to communicate with the Oh Dear API.
required: true
check_id:
description: The ID of the check that should run.
required: true
runs:
using: "composite"
steps:
- run: |
curl -X POST "https://ohdear.app/api/checks/${{ inputs.CHECK_ID }}/request-run" \
-H "Authorization: Bearer ${{ inputs.OH_DEAR_TOKEN }}" \
-H "Accept: application/json" \
-H "Content-Type: application/json"
shell: bash
branding:
icon: zap
color: blue
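A minimal sketch; the check ID is a placeholder, the token is assumed to live in a repository secret, and the version ref is an assumption:
steps:
  - uses: stefanzweifel/oh-dear-request-run-action@v1 # assumed ref
    with:
      oh_dear_token: ${{ secrets.OH_DEAR_TOKEN }}
      check_id: 12345 # placeholder check ID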
Action ID: marketplace/actions/hello-world-docker-action
Author: GitHub Actions
Publisher: actions
Repository: github.com/actions/hello-world-docker-action
Greet someone and record the time
| Name | Required | Description |
|---|---|---|
who-to-greet |
Required | Who to greet Default: World |
| Name | Description |
|---|---|
time |
The time we greeted you |
name: Hello, World!
description: Greet someone and record the time
author: GitHub Actions
# Define your inputs here.
inputs:
who-to-greet:
description: Who to greet
required: true
default: World
# Define your outputs here.
outputs:
time:
description: The time we greeted you
runs:
using: docker
image: Dockerfile
env:
INPUT_WHO_TO_GREET: ${{ inputs.who-to-greet }}
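A usage sketch that reads the `time` output; the `@main` ref is a placeholder:
steps:
  - id: hello
    uses: actions/hello-world-docker-action@main # placeholder ref
    with:
      who-to-greet: 'Mona the Octocat'
  - run: echo "Greeted at ${{ steps.hello.outputs.time }}"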
Action ID: marketplace/mheap/submodule-sync-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/submodule-sync-action
Automatically update a submodule to the latest commit
| Name | Required | Description |
|---|---|---|
token |
Optional | The GitHub API token to use Default: ${{ github.token }} |
path |
Required | The path in the repo to update |
ref |
Required | The branch name to check for updates in the remote repo |
pr_branch |
Required | The name of the branch to use when creating a pull request |
target_branch |
Required | The name of the branch that the PR should be raised against |
base_branch |
Optional | The name of the branch that the PR should be based on |
pr_body |
Optional | Text to include in the generated PR's description |
pr_labels |
Optional | The labels that the PR will have, newline-separated |
name: Submodule Sync
description: Automatically update a submodule to the latest commit
inputs:
token:
description: The GitHub API token to use
default: ${{ github.token }}
required: false
path:
description: The path in the repo to update
required: true
ref:
description: The branch name to check for updates in the remote repo
required: true
pr_branch:
description: The name of the branch to use when creating a pull request
required: true
target_branch:
description: The name of the branch that the PR should be raised against
required: true
base_branch:
description: The name of the branch that the PR should be based on
required: false
pr_body:
description: Text to include in the generated PR's description
required: false
default: ""
pr_labels:
description: The labels that the PR will have, newline-separated
required: false
default: ""
branding:
icon: activity
color: orange
runs:
using: docker
image: "Dockerfile"
Action ID: marketplace/mheap/github-action-required-labels
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-required-labels
Require labels to be added to a pull request before merging
| Name | Required | Description |
|---|---|---|
token |
Optional | The GitHub token to use when calling the API Default: ${{ github.token }} |
labels |
Required | Comma or new line separated list of labels to match |
mode |
Required | The mode of comparison to use. One of: exactly, minimum, maximum |
count |
Required | The required number of labels to match |
message |
Optional | The message to log and to add to the PR (if add_comment is true). See the README for available placeholders Default: Label error. Requires {{errorString}} {{count}} of: {{ provided }}. Found: {{ applied }} |
add_comment |
Optional | Add a comment to the PR if required labels are missing Default: false |
exit_type |
Optional | The exit type of the action. One of: failure, success |
use_regex |
Optional | Evaluate the values in `labels` as regular expressions Default: false |
| Name | Description |
|---|---|
labels |
The labels that were matched as a comma separated string |
status |
The status of the action. |
name: Require Labels
description: Require labels to be added to a pull request before merging
runs:
using: docker
image: Dockerfile
branding:
icon: check-square
color: blue
outputs:
labels:
description: "The labels that were matched as a comma separated string"
status:
description: "The status of the action."
inputs:
token:
description: The GitHub token to use when calling the API
default: ${{ github.token }}
required: false
labels:
description: "Comma or new line separated list of labels to match"
required: true
mode:
description: "The mode of comparison to use. One of: exactly, minimum, maximum"
required: true
count:
description: "The required number of labels to match"
required: true
message:
description: "The message to log and to add to the PR (if add_comment is true). See the README for available placeholders"
default: "Label error. Requires {{errorString}} {{count}} of: {{ provided }}. Found: {{ applied }}"
required: false
add_comment:
description: "Add a comment to the PR if required labels are missing"
default: "false"
required: false
exit_type:
description: "The exit type of the action. One of: failure, success"
required: false
use_regex:
description: Evaluate the values in `labels` as regular expressions
default: "false"
Action ID: marketplace/sobolevn/treebeard
Author: treebeardtech
Publisher: sobolevn
Repository: github.com/sobolevn/treebeard
Automatically containerise and run notebooks
| Name | Required | Description |
|---|---|---|
api-key |
Optional | treebeard api key |
notebooks |
Optional | notebooks to run Default: **/*ipynb |
docker-username |
Optional | |
docker-password |
Optional | |
docker-image-name |
Optional | the name of the image built by treebeard |
docker-registry-prefix |
Optional | the prefix of your docker image name, use instead of docker-image-name to generate a default image name |
use-docker |
Optional | run treebeard inside repo2docker Default: true |
debug |
Optional | Enable debug logging Default: false |
path |
Optional | Path of the repo to run Default: . |
name: "treebeard CI"
author: "treebeardtech"
description: "Automatically containerise and run notebooks"
inputs:
api-key:
description: "treebeard api key"
required: false
notebooks:
description: "notebooks to run"
required: false
default: "**/*ipynb"
docker-username:
description: ""
required: false
docker-password:
description: ""
required: false
docker-image-name:
description: "the name of the image built by treebeard"
required: false
docker-registry-prefix:
description: "the prefix of your docker image name, use instead of docker-image-name to generate a default image name"
required: false
use-docker:
description: "run treebeard inside repo2docker"
required: false
default: "true"
debug:
description: "Enable debug logging"
required: false
default: "false"
path:
description: "Path of the repo to run"
required: false
default: "."
runs:
using: "node12"
main: "dist/index.js"
Action ID: marketplace/DeLaGuardo/setup-graalvm
Author: DeLaGuardo
Publisher: DeLaGuardo
Repository: github.com/DeLaGuardo/setup-graalvm
Setup your runner with GraalVM
| Name | Required | Description |
|---|---|---|
graalvm-version |
Optional | (deprecated) The GraalVM version to make available on the path. |
graalvm |
Optional | The GraalVM version, defaults to 21.0.0.2 Default: 21.0.0.2 |
java |
Optional | The Java version GraalVM is based on, defaults to java8 Default: java8 |
arch |
Optional | The desired architecture. Options are - "amd64" (default), and "aarch64" (for linux only) Default: amd64 |
personal-token |
Optional | https://docs.github.com/en/actions/security-guides/automatic-token-authentication |
name: 'Setup GraalVM environment'
description: 'Setup your runner with GraalVM'
author: 'DeLaGuardo'
branding:
icon: 'gift'
color: 'blue'
inputs:
graalvm-version:
description: '(deprecated) The GraalVM version to make available on the path.'
graalvm:
description: 'The GraalVM version, defaults to 21.0.0.2'
default: '21.0.0.2'
java:
description: 'The Java version GraalVM is based on, defaults to java8'
default: 'java8'
arch:
description: 'The desired architecture. Options are - "amd64" (default), and "aarch64" (for linux only)'
default: 'amd64'
personal-token:
description: 'https://docs.github.com/en/actions/security-guides/automatic-token-authentication'
runs:
using: 'node12'
main: 'dist/index.js'
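A minimal sketch pinning a GraalVM release and Java base version; the `@master` ref and version values are placeholders:
steps:
  - uses: DeLaGuardo/setup-graalvm@master # placeholder ref
    with:
      graalvm: '21.0.0.2'
      java: 'java11' # placeholder Java base
  - run: java -version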
Action ID: marketplace/atlassian/gajira-todo
Author: Unknown
Publisher: atlassian
Repository: github.com/atlassian/gajira-todo
Create Jira issue for TODO comments
| Name | Required | Description |
|---|---|---|
project |
Required | Key of the project |
issuetype |
Required | Type of the issue to be created. Example: 'Incident' |
description |
Optional | Issue description |
| Name | Description |
|---|---|
issues |
Well-formed JSON array containing keys of all newly created issues |
name: Jira issue from TODO
description: Create Jira issue for TODO comments
branding:
icon: 'check-square'
color: 'blue'
inputs:
project:
description: Key of the project
required: true
issuetype:
description: "Type of the issue to be created. Example: 'Incident'"
required: true
description:
description: Issue description
required: false
outputs:
issues:
description: Well-formed JSON array containing keys of all newly created issues
runs:
using: 'node16'
main: './dist/index.js'
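A hedged sketch; Gajira actions typically authenticate through a preceding atlassian/gajira-login step, so that step, the secret names, the project key, and the version refs are assumptions:
steps:
  - uses: actions/checkout@v4 # assumed ref
  - uses: atlassian/gajira-login@v3 # assumed companion action and ref
    env:
      JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
      JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
      JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
  - uses: atlassian/gajira-todo@v3 # assumed ref
    with:
      project: GA # placeholder project key
      issuetype: Task
      description: Created automatically from a TODO comment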
Action ID: marketplace/CodelyTV/check-critical-files
Author: Unknown
Publisher: CodelyTV
Repository: github.com/CodelyTV/check-critical-files
Checks for critical files in your commits and warns you
| Name | Required | Description |
|---|---|---|
GITHUB_TOKEN |
Required | GitHub token |
critical_files |
Required | A newline separated list of critical files |
critical_message |
Optional | Message to show when listing all committed critical files Default: Caution! you have commited some critical files |
name: 'Check Critical Files'
description: 'Checks for critical files in your commits and warns you'
branding:
icon: 'alert-circle'
color: 'gray-dark'
inputs:
GITHUB_TOKEN:
description: 'GitHub token'
required: true
critical_files:
description: 'A newline separated list of critical files'
required: true
critical_message:
description: 'Message to show when listing all committed critical files'
required: false
default: 'Caution! you have commited some critical files'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.GITHUB_TOKEN }}
- ${{ inputs.critical_files }}
- ${{ inputs.critical_message }}
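A minimal sketch flagging a couple of placeholder files on pull requests; the trigger and `@main` ref are assumptions:
on: pull_request
jobs:
  critical-files:
    runs-on: ubuntu-latest
    steps:
      - uses: CodelyTV/check-critical-files@main # placeholder ref
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          critical_files: |
            Dockerfile
            .github/workflows/deploy.yml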
Action ID: marketplace/julia-actions/julia-runtest
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-runtest
Run the tests in a Julia package
| Name | Required | Description |
|---|---|---|
check_bounds |
Optional | Value determining which bounds checking setting to use. Options: yes | no | auto. Default value: yes. Default: yes |
coverage |
Optional | Value determining whether to test with coverage or not. Options: true | false. Default value: true. Default: true |
depwarn |
Optional | Value passed to the --depwarn flag. Options: yes | no | error. Default value: yes. Default: yes |
force_latest_compatible_version |
Optional | If true, then, for each [compat] entry in the active project, only allow the latest compatible version. If the value is auto and the pull request has been opened by Dependabot or CompatHelper, then force_latest_compatible_version will be set to true, otherwise it will be set to false. Options: true | false | auto. Default value: auto. Default: auto |
inline |
Optional | Value passed to the --inline flag. Options: yes | no. Default value: yes. Default: yes |
prefix |
Optional | Value inserted in front of the julia command, e.g. for running xvfb-run julia [...] |
project |
Optional | Value passed to the --project flag. The default value is the repository root: "@." Default: @. |
annotate |
Optional | Whether or not to attempt to create GitHub annotations to show test failures inline. Only effective on Julia 1.8+. Default: false |
compiled_modules |
Optional | Whether to run tests with `compiled-modules`. For possible values, refer to https://docs.julialang.org/en/v1/manual/command-line-interface/#command-line-interface Default: yes |
allow_reresolve |
Optional | Whether to allow re-resolving of package versions in the test environment. Only effective on Julia 1.9+. Options: true | false. Default value: true Default: true |
test_args |
Optional | Arguments string that is passed on to test. |
name: 'Run Julia package tests'
description: 'Run the tests in a Julia package'
author: 'David Anthoff'
branding:
icon: 'aperture'
color: 'gray-dark'
inputs:
check_bounds:
description: 'Value determining which bounds checking setting to use. Options: yes | no | auto. Default value: yes.'
default: 'yes'
coverage:
description: 'Value determining whether to test with coverage or not. Options: true | false. Default value: true.'
default: 'true'
depwarn:
description: 'Value passed to the --depwarn flag. Options: yes | no | error. Default value: yes.'
default: 'yes'
force_latest_compatible_version:
description: 'If true, then, for each [compat] entry in the active project, only allow the latest compatible version. If the value is auto and the pull request has been opened by Dependabot or CompatHelper, then force_latest_compatible_version will be set to true, otherwise it will be set to false. Options: true | false | auto. Default value: auto.'
default: 'auto'
inline:
description: 'Value passed to the --inline flag. Options: yes | no. Default value: yes.'
default: 'yes'
prefix:
description: 'Value inserted in front of the julia command, e.g. for running xvfb-run julia [...]'
default: ''
required: false
project:
description: 'Value passed to the --project flag. The default value is the repository root: "@."'
default: '@.'
annotate:
description: 'Whether or not to attempt to create GitHub annotations to show test failures inline. Only effective on Julia 1.8+.'
default: 'false'
compiled_modules:
description: 'Whether to run tests with `compiled-modules`. For possible values, refer to https://docs.julialang.org/en/v1/manual/command-line-interface/#command-line-interface'
default: 'yes'
allow_reresolve:
description: 'Whether to allow re-resolving of package versions in the test environment. Only effective on Julia 1.9+. Options: true | false. Default value: true'
default: 'true'
test_args:
description: 'Arguments string that is passed on to test.'
default: ''
runs:
using: 'composite'
steps:
- name: Set and export registry flavor preference
run: echo "JULIA_PKG_SERVER_REGISTRY_PREFERENCE=${JULIA_PKG_SERVER_REGISTRY_PREFERENCE:-eager}" >> ${GITHUB_ENV}
shell: bash
- name: Install dependencies in their own (shared) environment
run: |
# Functionality only currently works on a narrow range of Julia versions... see #76
if v"1.8pre" < VERSION < v"1.9.0-beta3"
using Pkg
Pkg.activate("tests-logger-env"; shared=true)
Pkg.add(Pkg.PackageSpec(name="GitHubActions", version="0.1"))
end
shell: julia --color=yes {0}
if: inputs.annotate == 'true'
- run: |
# The Julia command that will be executed
julia_cmd=( julia --color=yes --inline=${{ inputs.inline }} --project=${{ inputs.project }} -e 'include(joinpath(ENV["GITHUB_ACTION_PATH"], "test_harness.jl"))' -- ${{inputs.test_args}} )
# Add the prefix in front of the command if there is one
prefix=( ${{ inputs.prefix }} )
[[ -n ${prefix[*]} ]] && julia_cmd=( "${prefix[@]}" "${julia_cmd[@]}" )
# Run the Julia command
echo "::debug::Executing Julia: ${julia_cmd[*]}"
"${julia_cmd[@]}"
shell: bash
env:
ANNOTATE: ${{ inputs.annotate }}
COVERAGE: ${{ inputs.coverage }}
FORCE_LATEST_COMPATIBLE_VERSION: ${{ inputs.force_latest_compatible_version }}
CHECK_BOUNDS: ${{ inputs.check_bounds }}
COMPILED_MODULES: ${{ inputs.compiled_modules }}
ALLOW_RERESOLVE: ${{ inputs.allow_reresolve }}
DEPWARN: ${{ inputs.depwarn }}
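A typical test job sketch; the surrounding checkout/setup/build steps and version refs are assumptions:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # assumed ref
      - uses: julia-actions/setup-julia@v2 # assumed ref
        with:
          version: '1'
      - uses: julia-actions/julia-buildpkg@v1 # assumed ref
      - uses: julia-actions/julia-runtest@v1 # assumed ref
        with:
          annotate: 'true'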
Action ID: marketplace/vn7n24fzkq/github-action-test
Author: vn7n24fzkq
Publisher: vn7n24fzkq
Repository: github.com/vn7n24fzkq/github-action-test
Generate profile summary cards and commit to default branch
| Name | Required | Description |
|---|---|---|
USERNAME |
Required | GitHub username Default: ${{ github.repository_owner }} |
name: GitHub-Profile-Summary-Cards
description: Generate profile summary cards and commit to default branch
author: vn7n24fzkq
inputs:
USERNAME:
required: true
description: 'GitHub username'
default: ${{ github.repository_owner }}
runs:
using: 'node12'
main: 'dist/index.js'
branding:
icon: 'activity'
color: 'orange'
Action ID: marketplace/peter-evans/find-comment
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/find-comment
Find an issue or pull request comment
| Name | Required | Description |
|---|---|---|
token |
Optional | GITHUB_TOKEN or a repo scoped PAT. Default: ${{ github.token }} |
repository |
Optional | The full name of the repository containing the issue or pull request. Default: ${{ github.repository }} |
issue-number |
Optional | The number of the issue or pull request in which to search. |
comment-author |
Optional | The GitHub user name of the comment author. |
body-includes |
Optional | A string to search for in the body of comments. |
body-regex |
Optional | A regular expression to search for in the body of comments. |
direction |
Optional | Search direction, specified as `first` or `last` Default: first |
nth |
Optional | 0-indexed number, specifying which comment to return if multiple are found |
| Name | Description |
|---|---|
comment-id |
The id of the matching comment found. |
comment-node-id |
The GraphQL node id of the matching comment found. |
comment-body |
The body of the matching comment found. |
comment-author |
The author of the matching comment found. |
name: 'Find Comment'
description: 'Find an issue or pull request comment'
inputs:
token:
description: 'GITHUB_TOKEN or a repo scoped PAT.'
default: ${{ github.token }}
repository:
description: 'The full name of the repository containing the issue or pull request.'
default: ${{ github.repository }}
issue-number:
description: 'The number of the issue or pull request in which to search.'
comment-author:
description: 'The GitHub user name of the comment author.'
body-includes:
description: 'A string to search for in the body of comments.'
body-regex:
description: 'A regular expression to search for in the body of comments.'
direction:
description: 'Search direction, specified as `first` or `last`'
default: first
nth:
description: '0-indexed number, specifying which comment to return if multiple are found'
default: 0
outputs:
comment-id:
description: 'The id of the matching comment found.'
comment-node-id:
description: 'The GraphQL node id of the matching comment found.'
comment-body:
description: 'The body of the matching comment found.'
comment-author:
description: 'The author of the matching comment found.'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'search'
color: 'gray-dark'
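A sketch that locates a bot comment on the current pull request and echoes its id; the author filter and search string are placeholders, and the `@v3` ref is an assumption:
steps:
  - id: fc
    uses: peter-evans/find-comment@v3 # assumed ref
    with:
      issue-number: ${{ github.event.pull_request.number }}
      comment-author: 'github-actions[bot]' # placeholder author
      body-includes: 'Build output' # placeholder search string
  - run: echo "Found comment ${{ steps.fc.outputs.comment-id }}"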
Action ID: marketplace/mheap/github-action-issue-to-jira
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-issue-to-jira
Create Jira tickets for GitHub Issues automatically
| Name | Required | Description |
|---|---|---|
jiraHost |
Optional | The URL of your Jira instance e.g. https://myapp.atlassian.net |
jiraUsername |
Optional | The username to authenticate with |
jiraPassword |
Optional | The password for the provided username |
project |
Optional | The project key to create a new issue in |
assignee |
Optional | The default assignee for any created issues |
name: Issue to Jira
description: Create Jira tickets for GitHub Issues automatically
runs:
using: docker
image: Dockerfile
branding:
icon: mail
color: orange
inputs:
jiraHost:
description: "The URL of your Jira instance e.g. https://myapp.atlassian.net"
jiraUsername:
description: "The username to authenticate with"
jiraPassword:
description: "The password for the provided username"
project:
description: "The project key to create a new issue in"
assignee:
description: "The default assignee for any created issues"
Action ID: marketplace/actions/upload-artifact
Author: GitHub
Publisher: actions
Repository: github.com/actions/upload-artifact
Upload a build artifact that can be used by subsequent workflow steps
| Name | Required | Description |
|---|---|---|
name |
Optional | Artifact name Default: artifact |
path |
Required | A file, directory or wildcard pattern that describes what to upload |
if-no-files-found |
Optional | The desired behavior if no files are found using the provided path.
Available Options:
warn: Output a warning but do not fail the action
error: Fail the action with an error message
ignore: Do not output any warnings or errors, the action does not fail
Default: warn |
retention-days |
Optional | Duration after which artifact will expire in days. 0 means using default retention. Minimum 1 day. Maximum 90 days unless changed from the repository settings page. |
compression-level |
Optional | The level of compression for Zlib to be applied to the artifact archive. The value can range from 0 to 9: - 0: No compression - 1: Best speed - 6: Default compression (same as GNU Gzip) - 9: Best compression Higher levels will result in better compression, but will take longer to complete. For large files that are not easily compressed, a value of 0 is recommended for significantly faster uploads.
Default: 6 |
overwrite |
Optional | If true, an artifact with a matching name will be deleted before a new one is uploaded. If false, the action will fail if an artifact for the given name already exists. Does not fail if the artifact does not exist.
Default: false |
include-hidden-files |
Optional | If true, hidden files will be included in the artifact. If false, hidden files will be excluded from the artifact.
Default: false |
| Name | Description |
|---|---|
artifact-id |
A unique identifier for the artifact that was just uploaded. Empty if the artifact upload failed. This ID can be used as input to other APIs to download, delete or get more information about an artifact: https://docs.github.com/en/rest/actions/artifacts |
artifact-url |
A download URL for the artifact that was just uploaded. Empty if the artifact upload failed. This download URL only works for requests authenticated with GitHub. Anonymous downloads will be prompted to first log in. If an anonymous download URL is needed, then a short, time-restricted URL can be generated using the download artifact API: https://docs.github.com/en/rest/actions/artifacts#download-an-artifact This URL will be valid for as long as the artifact, the workflow run, and the repository exist. Once an artifact has expired this URL will no longer work. Common use cases for such a download URL include adding download links to artifacts in descriptions or comments on pull requests or issues. |
artifact-digest |
SHA-256 digest for the artifact that was just uploaded. Empty if the artifact upload failed. |
name: 'Upload a Build Artifact'
description: 'Upload a build artifact that can be used by subsequent workflow steps'
author: 'GitHub'
inputs:
name:
description: 'Artifact name'
default: 'artifact'
path:
description: 'A file, directory or wildcard pattern that describes what to upload'
required: true
if-no-files-found:
description: >
The desired behavior if no files are found using the provided path.
Available Options:
warn: Output a warning but do not fail the action
error: Fail the action with an error message
ignore: Do not output any warnings or errors, the action does not fail
default: 'warn'
retention-days:
description: >
Duration after which artifact will expire in days. 0 means using default retention.
Minimum 1 day.
Maximum 90 days unless changed from the repository settings page.
compression-level:
description: >
The level of compression for Zlib to be applied to the artifact archive.
The value can range from 0 to 9:
- 0: No compression
- 1: Best speed
- 6: Default compression (same as GNU Gzip)
- 9: Best compression
Higher levels will result in better compression, but will take longer to complete.
For large files that are not easily compressed, a value of 0 is recommended for significantly faster uploads.
default: '6'
overwrite:
description: >
If true, an artifact with a matching name will be deleted before a new one is uploaded.
If false, the action will fail if an artifact for the given name already exists.
Does not fail if the artifact does not exist.
default: 'false'
include-hidden-files:
description: >
If true, hidden files will be included in the artifact.
If false, hidden files will be excluded from the artifact.
default: 'false'
outputs:
artifact-id:
description: >
A unique identifier for the artifact that was just uploaded. Empty if the artifact upload failed.
This ID can be used as input to other APIs to download, delete or get more information about an artifact: https://docs.github.com/en/rest/actions/artifacts
artifact-url:
description: >
A download URL for the artifact that was just uploaded. Empty if the artifact upload failed.
This download URL only works for requests authenticated with GitHub. Anonymous downloads will be prompted to first log in.
If an anonymous download URL is needed, then a short, time-restricted URL can be generated using the download artifact API: https://docs.github.com/en/rest/actions/artifacts#download-an-artifact
This URL will be valid for as long as the artifact, the workflow run, and the repository exist. Once an artifact has expired this URL will no longer work.
Common use cases for such a download URL include adding download links to artifacts in descriptions or comments on pull requests or issues.
artifact-digest:
description: >
SHA-256 digest for the artifact that was just uploaded. Empty if the artifact upload failed.
runs:
using: 'node24'
main: 'dist/upload/index.js'
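A usage sketch uploading a build directory and echoing the resulting artifact id; the build step contents and `@v4` ref are assumptions:
steps:
  - run: mkdir -p dist && echo "hello" > dist/output.txt # placeholder build step
  - id: upload
    uses: actions/upload-artifact@v4 # assumed ref
    with:
      name: build-output
      path: dist/
      retention-days: 5
  - run: echo "artifact-id=${{ steps.upload.outputs.artifact-id }}"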
Action ID: marketplace/julia-actions/julia-treeshake
Author: Ian Butterworth
Publisher: julia-actions
Repository: github.com/julia-actions/julia-treeshake
Run a given test to see if any project dependencies are unused
| Name | Required | Description |
|---|---|---|
test_code |
Optional | Code to run while evaluating dependency usage Default: import Pkg; Pkg.test(julia_args=["--code-coverage=user"]) |
prefix |
Optional | Value inserted in front of the julia command, e.g. for running xvfb-run julia [...] |
project |
Optional | Value passed to the --project flag. The default value is the repository root: "@." Default: @. |
fail_unused_direct_deps |
Optional | Whether to fail the run if unused direct dependencies are found. Default value: yes Default: yes |
fail_unused_indirect_deps |
Optional | Whether to fail the run if unused indirect dependencies are found. Default value: no Default: no |
name: 'Find unused Julia project dependencies'
description: 'Run a given test to see if any project dependencies are unused'
author: 'Ian Butterworth'
branding:
icon: 'aperture'
color: 'gray-dark'
inputs:
test_code:
description: 'Code to run while evaluating dependency usage'
default: 'import Pkg; Pkg.test(julia_args=["--code-coverage=user"])'
required: false
prefix:
description: 'Value inserted in front of the julia command, e.g. for running xvfb-run julia [...]'
default: ''
required: false
project:
description: 'Value passed to the --project flag. The default value is the repository root: "@."'
default: '@.'
required: false
fail_unused_direct_deps:
description: 'Whether to fail the run if unused direct dependencies are found. Default value: yes'
default: 'yes'
required: false
fail_unused_indirect_deps:
description: 'Whether to fail the run if unused indirect dependencies are found. Default value: no'
default: 'no'
required: false
runs:
using: 'composite'
steps:
- run: |
# The Julia command that will be executed
julia_cmd=( julia --code-coverage=user --color=yes --project=${{ inputs.project }} -e '${{ inputs.test_code }}' )
# Add the prefix in front of the command if there is one
prefix="${{ inputs.prefix }}"
[[ -n $prefix ]] && julia_cmd=( "$prefix" "${julia_cmd[@]}" )
# Run the Julia command
"${julia_cmd[@]}"
shell: bash
- run: julia --color=yes --project=${{ inputs.project }} "$GITHUB_ACTION_PATH"/treeshake.jl --fail_unused_direct=${{ inputs.fail_unused_direct_deps }} --fail_unused_indirect=${{ inputs.fail_unused_indirect_deps }}
shell: bash
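A sketch of running the treeshake check after the standard Julia setup steps; version refs are assumptions and failing on unused indirect dependencies is shown only as an illustration:
steps:
  - uses: actions/checkout@v4 # assumed ref
  - uses: julia-actions/setup-julia@v2 # assumed ref
  - uses: julia-actions/julia-buildpkg@v1 # assumed ref
  - uses: julia-actions/julia-treeshake@main # placeholder ref
    with:
      fail_unused_indirect_deps: 'yes'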
Action ID: marketplace/github/branch-deploy
Author: Grant Birkinbine
Publisher: github
Repository: github.com/github/branch-deploy
Enabling Branch Deployments through IssueOps with GitHub Actions
| Name | Required | Description |
|---|---|---|
github_token |
Required | The GitHub token used to create an authenticated client - Provided for you by default! Default: ${{ github.token }} |
status |
Required | The status of the GitHub Actions - For use in the post run workflow - Provided for you by default! Default: ${{ job.status }} |
environment |
Optional | The name of the default environment to deploy to. Example: by default, if you type `.deploy`, it will assume "production" as the default environment Default: production |
environment_targets |
Optional | Optional (or additional) target environments to select for use with deployments. Example, "production,development,staging". Example usage: `.deploy to development`, `.deploy to production`, `.deploy to staging` Default: production,development,staging |
draft_permitted_targets |
Optional | Optional environments which can allow "draft" pull requests to be deployed. By default, this input option is empty and no environments allow deployments sourced from a pull request in a "draft" state. Examples: "development,staging" |
environment_urls |
Optional | Optional target environment URLs to use with deployments. This input option is a mapping of environment names to URLs and the environment names must match the "environment_targets" input option. This option is a comma separated list with pipes (|) separating the environment from the URL. Note: "disabled" is a special keyword to disable an environment url if you enable this option. Format: "<environment1>|<url1>,<environment2>|<url2>,etc" Example: "production|https://myapp.com,development|https://dev.myapp.com,staging|disabled" |
environment_url_in_comment |
Optional | If the environment_url detected in the deployment should be appended to the successful deployment comment or not. Examples: "true" or "false" Default: true |
production_environments |
Optional | A comma separated list of environments that should be treated as "production". GitHub defines "production" as an environment that end users or systems interact with. Example: "production,production-eu". By default, GitHub will set the "production_environment" to "true" if the environment name is "production". This option allows you to override that behavior so you can use "prod", "prd", "main", "production-eu", etc. as your production environment name. ref: https://github.com/github/branch-deploy/issues/208 Default: production |
reaction |
Optional | If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket" or "eyes" Default: eyes |
trigger |
Optional | The string to look for in comments as an IssueOps trigger. Example: ".deploy" Default: .deploy |
noop_trigger |
Optional | The string to look for in comments as an IssueOps noop trigger. Example: ".noop" Default: .noop |
lock_trigger |
Optional | The string to look for in comments as an IssueOps lock trigger. Used for locking branch deployments on a specific branch. Example: ".lock" Default: .lock |
unlock_trigger |
Optional | The string to look for in comments as an IssueOps unlock trigger. Used for unlocking branch deployments. Example: ".unlock" Default: .unlock |
help_trigger |
Optional | The string to look for in comments as an IssueOps help trigger. Example: ".help" Default: .help |
lock_info_alias |
Optional | An alias or shortcut to get details about the current lock (if it exists) Example: ".info" Default: .wcid |
permissions |
Required | The allowed GitHub permissions an actor can have to invoke IssueOps commands - Example: "write,admin" Default: write,admin |
commit_verification |
Optional | Whether or not to enforce commit verification before a deployment can continue. Default is "false" Default: false |
param_separator |
Optional | The separator to use for parsing parameters in comments in deployment requests. Parameters are saved as outputs and can be used in subsequent steps Default: | |
global_lock_flag |
Optional | The flag to pass into the lock command to lock all environments. Example: "--global" Default: --global |
stable_branch |
Optional | The name of a stable branch to deploy to (rollbacks). Example: "main" Default: main |
update_branch |
Optional | Determine how you want this Action to handle "out-of-date" branches. Available options: "disabled", "warn", "force". "disabled" means that the Action will not care if a branch is out-of-date. "warn" means that the Action will warn the user that a branch is out-of-date and exit without deploying. "force" means that the Action will force update the branch. Note: The "force" option is not recommended due to Actions not being able to re-run CI on commits originating from Actions itself Default: warn |
outdated_mode |
Optional | The mode to use for determining if a branch is up-to-date or not before allowing deployments. This option is closely related to the "update_branch" input option above. There are three available modes to choose from "pr_base", "default_branch", or "strict". The default is "strict" to help ensure that deployments are using the most up-to-date code. Please see the docs/outdated_mode.md document for more details. Default: strict |
required_contexts |
Optional | Manually enforce commit status checks before a deployment can continue. Only use this option if you wish to manually override the settings you have configured for your branch protection settings for your GitHub repository. Default is "false" - Example value: "context1,context2,context3" - In most cases you will not need to touch this option Default: false |
skip_ci |
Optional | A comma separated list of environments that will not use passing CI as a requirement for deployment. Use this option to explicitly bypass branch protection settings for a certain environment in your repository. Default is an empty string "" - Example: "development,staging" |
checks |
Optional | This input defines how the branch-deploy Action will handle the status of CI checks on your PR/branch before deployments can continue. `"all"` requires that all CI checks must pass in order for a deployment to be triggered. `"required"` only waits for required CI checks to be passing. You can also pass in the names of your CI jobs in a comma separated list. View the documentation (docs/checks.md) for more details. Default: all |
ignored_checks |
Optional | A comma separated list of checks that will be ignored when determining if a deployment can continue. This setting allows you to skip failing, pending, or incomplete checks regardless of the `checks` setting above. Example: "lint,markdown-formatting,update-pr-label". View the documentation (docs/checks.md) for more details. |
skip_reviews |
Optional | A comma separated list of environments that will not use reviews/approvals as a requirement for deployment. Use this option to explicitly bypass branch protection settings for a certain environment in your repository. Default is an empty string "" - Example: "development,staging" |
allow_forks |
Optional | Allow branch deployments to run on repository forks. If you want to harden your workflows, this option can be set to false. Default is "true" Default: true |
admins |
Optional | A comma separated list of GitHub usernames or teams that should be considered admins by this Action. Admins can deploy pull requests without the need for branch protection approvals. Example: "monalisa,octocat,my-org/my-team" Default: false |
admins_pat |
Optional | A GitHub personal access token with "read:org" scopes. This is only needed if you are using the "admins" option with a GitHub org team. For example: "my-org/my-team" Default: false |
merge_deploy_mode |
Optional | This is an advanced option that is an alternate workflow bundled into this Action. You can control how merge commits are handled when a PR is merged into your repository's default branch. If the merge commit SHA matches the latest deployment for the same environment, then the 'continue' output will be set to 'false' which indicates that a deployment should not be performed again as the latest deployment is identical. If the merge commit SHA does not match the latest deployment for the same environment, then the 'continue' output will be set to 'true' which indicates that a deployment should be performed. With this option, the 'environment' output is also set for subsequent steps to reference Default: false |
unlock_on_merge_mode |
Optional | This is an advanced option that is an alternate workflow bundled into this Action. You can optionally use this mode in a custom workflow to automatically release all locks that came from a pull request when the pull request is merged. This is useful if you want to ensure that locks are not left behind when a pull request is merged. Default: false |
skip_completing |
Optional | If set to "true", skip the process of completing a deployment. You must manually create a deployment status after the deployment is complete. Default is "false" Default: false |
deploy_message_path |
Optional | The path to a markdown file which is used as a template for custom deployment messages. Example: ".github/deployment_message.md" Default: .github/deployment_message.md |
sticky_locks |
Optional | If set to "true", locks will not be released after a deployment run completes. This applies to both successful, and failed deployments.Sticky locks are also known as "hubot style deployment locks". They will persist until they are manually released by a user, or if you configure another workflow with the "unlock on merge" mode to remove them automatically on PR merge. Default: false |
sticky_locks_for_noop |
Optional | If set to "true", then sticky_locks will also be used for noop deployments. This can be useful in some cases but it often leads to locks being left behind when users test noop deployments. Default: false |
allow_sha_deployments |
Optional | If set to "true", then you can deploy a specific sha instead of a branch. Example: ".deploy 1234567890abcdef1234567890abcdef12345678 to production" - This is dangerous and potentially unsafe, view the docs to learn more: https://github.com/github/branch-deploy/blob/main/docs/sha-deployments.md Default: false |
disable_naked_commands |
Optional | If set to "true", then naked commands will be disabled. Example: ".deploy" will not trigger a deployment. Instead, you must use ".deploy to production" to trigger a deployment. This is useful if you want to prevent accidental deployments from happening. Read more about naked commands here: https://github.com/github/branch-deploy/blob/main/docs/naked-commands.md Default: false |
successful_deploy_labels |
Optional | A comma separated list of labels to add to the pull request when a deployment is successful. Example: "deployed,success" |
successful_noop_labels |
Optional | A comma separated list of labels to add to the pull request when a noop deployment is successful. Example: "noop,success" |
failed_deploy_labels |
Optional | A comma separated list of labels to add to the pull request when a deployment fails. Example: "failed,deploy-failed" |
failed_noop_labels |
Optional | A comma separated list of labels to add to the pull request when a noop deployment fails. Example: "failed,noop-failed" |
skip_successful_noop_labels_if_approved |
Optional | Whether or not the post run logic should skip adding successful noop labels if the pull request is approved. This can be useful if you add a label such as "ready-for-review" after a .noop completes but want to skip adding that label in situations where the pull request is already approved. Default: false |
skip_successful_deploy_labels_if_approved |
Optional | Whether or not the post run logic should skip adding successful deploy labels if the pull request is approved. This can be useful if you add a label such as "ready-for-review" after a .deploy completes but want to skip adding that label in situations where the pull request is already approved. Default: false |
enforced_deployment_order |
Optional | A comma separated list of environments that must be deployed in a specific order. Example: `"development,staging,production"`. If this is set then you cannot deploy to latter environments unless the former ones have a successful and active deployment on the latest commit first. |
use_security_warnings |
Optional | Whether or not to leave security related warnings in log messages during deployments. Default is "true" Default: true |
allow_non_default_target_branch_deployments |
Optional | Whether or not to allow deployments of pull requests that target a branch other than the default branch (aka stable branch) as their merge target. By default, this Action would reject the deployment of a branch named "feature-branch" if it was targeting "foo" instead of "main" (or whatever your default branch is). This option allows you to override that behavior and be able to deploy any branch in your repository regardless of the target branch. This option is potentially unsafe and should be used with caution as most default branches contain branch protection rules. Often times non-default branches do not contain these same branch protection rules. Follow along in this issue thread to learn more https://github.com/github/branch-deploy/issues/340 Default: false |
deployment_confirmation |
Optional | Whether or not to require an additional confirmation before a deployment can continue. Default is "false". If your project requires elevated security, it is highly recommended to enable this option - especially in open source projects where you might be deploying forks. docs/deployment-confirmation.md Default: false |
deployment_confirmation_timeout |
Optional | The number of seconds to wait for a deployment confirmation before timing out. Default is "60" seconds (1 minute). Default: 60 |
| Name | Description |
|---|---|
continue |
The string "true" if the deployment should continue, otherwise empty - Use this to conditionally control if your deployment should proceed or not |
triggered |
The string "true" if the trigger was found, otherwise the string "false" |
comment_body |
The comment body |
issue_number |
The issue number of the pull request (or issue) that was commented on |
actor |
The GitHub handle of the actor that invoked the IssueOps command |
environment |
The environment that has been selected for a deployment |
params |
The raw parameters that were passed into the deployment command (see param_separator) |
parsed_params |
A stringified JSON object of the parsed parameters that were passed into the deployment command |
noop |
The string "true" if the noop trigger was found, otherwise the string "false" - Use this to conditionally control whether your deployment runs as a noop or not |
sha |
The sha of the branch to be deployed |
default_branch_tree_sha |
The sha of the default branch tree (useful for subsequent workflow steps if they need to do commit comparisons) |
base_ref |
The base ref that the pull request is merging into |
ref |
The ref (branch or sha) to use with deployment |
comment_id |
The comment id which triggered this deployment |
type |
The type of trigger that was detected (examples: deploy, lock, unlock, lock-info-alias, help) |
fork |
The string "true" if the pull request is a fork, otherwise "false" |
fork_ref |
The true ref of the fork |
fork_label |
The API label field returned for the fork |
fork_checkout |
The console command presented in the GitHub UI to checkout a given fork locally |
fork_full_name |
The full name of the fork in "org/repo" format |
deployment_id |
The ID of the deployment created by running this action |
environment_url |
The environment URL detected and used for the deployment (sourced from the environment_urls input) |
initial_reaction_id |
The reaction id for the initial reaction on the trigger comment |
initial_comment_id |
The comment id for the "Deployment Triggered 🚀" comment created by the action |
actor_handle |
The handle of the user who triggered the action |
global_lock_claimed |
The string "true" if the global lock was claimed |
global_lock_released |
The string "true" if the global lock was released |
unlocked_environments |
Only exposed when using the "unlock on merge" mode - This output variable will contain a comma separated list of the environments that were unlocked |
sha_deployment |
If "allow_sha_deployments" is enabled, and a sha deployment is performed instead of a branch deployment, this output variable will contain the sha that was deployed. Otherwise, this output variable will be empty |
review_decision |
The pull request review status. Can be one of a few values - examples: APPROVED, REVIEW_REQUIRED, CHANGES_REQUESTED, skip_reviews, null |
is_outdated |
The string "true" if the branch is outdated, otherwise "false" |
merge_state_status |
The status of the merge state. Can be one of a few values - examples: "DIRTY", "DRAFT", "CLEAN", etc |
commit_status |
The status of the commit. Can be one of a few values - examples: "SUCCESS", null, "skip_ci", "PENDING", "FAILURE" etc |
approved_reviews_count |
The number of approved reviews on the pull request |
needs_to_be_deployed |
A comma separated list of environments that need successful and active deployments before the current environment (that was requested) can be deployed. This output is tied to the "enforced_deployment_order" input option. |
commit_verified |
The string "true" if the commit has a verified signature, otherwise "false" |
total_seconds |
The total number of seconds that the deployment took to complete (Integer) |
non_default_target_branch_used |
The string "true" if the pull request is targeting a branch other than the default branch (aka stable branch) for the merge, otherwise unset |
name: "branch-deploy"
description: "Enabling Branch Deployments through IssueOps with GitHub Actions"
author: "Grant Birkinbine"
branding:
icon: 'git-branch'
color: 'gray-dark'
inputs:
github_token:
description: The GitHub token used to create an authenticated client - Provided for you by default!
default: ${{ github.token }}
required: true
status:
description: The status of the GitHub Actions - For use in the post run workflow - Provided for you by default!
default: ${{ job.status }}
required: true
environment:
description: 'The name of the default environment to deploy to. Example: by default, if you type `.deploy`, it will assume "production" as the default environment'
required: false
default: "production"
environment_targets:
description: 'Optional (or additional) target environments to select for use with deployments. Example, "production,development,staging". Example usage: `.deploy to development`, `.deploy to production`, `.deploy to staging`'
required: false
default: "production,development,staging"
draft_permitted_targets:
description: 'Optional environments which can allow "draft" pull requests to be deployed. By default, this input option is empty and no environments allow deployments sourced from a pull request in a "draft" state. Examples: "development,staging"'
required: false
default: ""
environment_urls:
description: 'Optional target environment URLs to use with deployments. This input option is a mapping of environment names to URLs and the environment names must match the "environment_targets" input option. This option is a comma separated list with pipes (|) separating the environment from the URL. Note: "disabled" is a special keyword to disable an environment url if you enable this option. Format: "<environment1>|<url1>,<environment2>|<url2>,etc" Example: "production|https://myapp.com,development|https://dev.myapp.com,staging|disabled"'
required: false
default: ""
environment_url_in_comment:
description: 'If the environment_url detected in the deployment should be appended to the successful deployment comment or not. Examples: "true" or "false"'
required: false
default: "true"
production_environments:
description: 'A comma separated list of environments that should be treated as "production". GitHub defines "production" as an environment that end users or systems interact with. Example: "production,production-eu". By default, GitHub will set the "production_environment" to "true" if the environment name is "production". This option allows you to override that behavior so you can use "prod", "prd", "main", "production-eu", etc. as your production environment name. ref: https://github.com/github/branch-deploy/issues/208'
required: false
default: "production"
reaction:
description: 'If set, the specified emoji "reaction" is put on the comment to indicate that the trigger was detected. For example, "rocket" or "eyes"'
required: false
default: "eyes"
trigger:
description: 'The string to look for in comments as an IssueOps trigger. Example: ".deploy"'
required: false
default: ".deploy"
noop_trigger:
description: 'The string to look for in comments as an IssueOps noop trigger. Example: ".noop"'
required: false
default: ".noop"
lock_trigger:
description: 'The string to look for in comments as an IssueOps lock trigger. Used for locking branch deployments on a specific branch. Example: ".lock"'
required: false
default: ".lock"
unlock_trigger:
description: 'The string to look for in comments as an IssueOps unlock trigger. Used for unlocking branch deployments. Example: ".unlock"'
required: false
default: ".unlock"
help_trigger:
description: 'The string to look for in comments as an IssueOps help trigger. Example: ".help"'
required: false
default: ".help"
lock_info_alias:
description: 'An alias or shortcut to get details about the current lock (if it exists) Example: ".info"'
required: false
default: ".wcid"
permissions:
description: 'The allowed GitHub permissions an actor can have to invoke IssueOps commands - Example: "write,admin"'
required: true
default: "write,admin"
commit_verification:
description: 'Whether or not to enforce commit verification before a deployment can continue. Default is "false"'
required: false
default: "false"
param_separator:
description: 'The separator to use for parsing parameters in comments in deployment requests. Parameters are saved as outputs and can be used in subsequent steps'
required: false
default: "|"
global_lock_flag:
description: 'The flag to pass into the lock command to lock all environments. Example: "--global"'
required: false
default: "--global"
stable_branch:
description: 'The name of a stable branch to deploy to (rollbacks). Example: "main"'
required: false
default: "main"
update_branch:
description: 'Determine how you want this Action to handle "out-of-date" branches. Available options: "disabled", "warn", "force". "disabled" means that the Action will not care if a branch is out-of-date. "warn" means that the Action will warn the user that a branch is out-of-date and exit without deploying. "force" means that the Action will force update the branch. Note: The "force" option is not recommended due to Actions not being able to re-run CI on commits originating from Actions itself'
required: false
default: "warn"
outdated_mode:
description: 'The mode to use for determining if a branch is up-to-date or not before allowing deployments. This option is closely related to the "update_branch" input option above. There are three available modes to choose from "pr_base", "default_branch", or "strict". The default is "strict" to help ensure that deployments are using the most up-to-date code. Please see the docs/outdated_mode.md document for more details.'
required: false
default: "strict"
required_contexts:
description: 'Manually enforce commit status checks before a deployment can continue. Only use this option if you wish to manually override the settings you have configured for your branch protection settings for your GitHub repository. Default is "false" - Example value: "context1,context2,context3" - In most cases you will not need to touch this option'
required: false
default: "false"
skip_ci:
description: 'A comma separated list of environments that will not use passing CI as a requirement for deployment. Use this option to explicitly bypass branch protection settings for a certain environment in your repository. Default is an empty string "" - Example: "development,staging"'
required: false
default: ""
checks:
description: 'This input defines how the branch-deploy Action will handle the status of CI checks on your PR/branch before deployments can continue. `"all"` requires that all CI checks must pass in order for a deployment to be triggered. `"required"` only waits for required CI checks to be passing. You can also pass in the names of your CI jobs in a comma separated list. View the documentation (docs/checks.md) for more details.'
required: false
default: "all"
ignored_checks:
description: 'A comma separated list of checks that will be ignored when determining if a deployment can continue. This setting allows you to skip failing, pending, or incomplete checks regardless of the `checks` setting above. Example: "lint,markdown-formatting,update-pr-label". View the documentation (docs/checks.md) for more details.'
required: false
default: ""
skip_reviews:
description: 'A comma separated list of environments that will not use reviews/approvals as a requirement for deployment. Use this option to explicitly bypass branch protection settings for a certain environment in your repository. Default is an empty string "" - Example: "development,staging"'
required: false
default: ""
allow_forks:
description: 'Allow branch deployments to run on repository forks. If you want to harden your workflows, this option can be set to false. Default is "true"'
required: false
default: "true"
admins:
description: 'A comma separated list of GitHub usernames or teams that should be considered admins by this Action. Admins can deploy pull requests without the need for branch protection approvals. Example: "monalisa,octocat,my-org/my-team"'
required: false
default: "false"
admins_pat:
description: 'A GitHub personal access token with "read:org" scopes. This is only needed if you are using the "admins" option with a GitHub org team. For example: "my-org/my-team"'
required: false
default: "false"
merge_deploy_mode:
description: This is an advanced option that is an alternate workflow bundled into this Action. You can control how merge commits are handled when a PR is merged into your repository's default branch. If the merge commit SHA matches the latest deployment for the same environment, then the 'continue' output will be set to 'false' which indicates that a deployment should not be performed again as the latest deployment is identical. If the merge commit SHA does not match the latest deployment for the same environment, then the 'continue' output will be set to 'true' which indicates that a deployment should be performed. With this option, the 'environment' output is also set for subsequent steps to reference
required: false
default: "false"
unlock_on_merge_mode:
description: This is an advanced option that is an alternate workflow bundled into this Action. You can optionally use this mode in a custom workflow to automatically release all locks that came from a pull request when the pull request is merged. This is useful if you want to ensure that locks are not left behind when a pull request is merged.
required: false
default: "false"
skip_completing:
description: 'If set to "true", skip the process of completing a deployment. You must manually create a deployment status after the deployment is complete. Default is "false"'
required: false
default: "false"
deploy_message_path:
description: 'The path to a markdown file which is used as a template for custom deployment messages. Example: ".github/deployment_message.md"'
required: false
default: ".github/deployment_message.md"
sticky_locks:
description: 'If set to "true", locks will not be released after a deployment run completes. This applies to both successful, and failed deployments.Sticky locks are also known as "hubot style deployment locks". They will persist until they are manually released by a user, or if you configure another workflow with the "unlock on merge" mode to remove them automatically on PR merge.'
required: false
default: "false"
sticky_locks_for_noop:
description: 'If set to "true", then sticky_locks will also be used for noop deployments. This can be useful in some cases but it often leads to locks being left behind when users test noop deployments.'
required: false
default: "false"
allow_sha_deployments:
description: 'If set to "true", then you can deploy a specific sha instead of a branch. Example: ".deploy 1234567890abcdef1234567890abcdef12345678 to production" - This is dangerous and potentially unsafe, view the docs to learn more: https://github.com/github/branch-deploy/blob/main/docs/sha-deployments.md'
required: false
default: "false"
disable_naked_commands:
description: 'If set to "true", then naked commands will be disabled. Example: ".deploy" will not trigger a deployment. Instead, you must use ".deploy to production" to trigger a deployment. This is useful if you want to prevent accidental deployments from happening. Read more about naked commands here: https://github.com/github/branch-deploy/blob/main/docs/naked-commands.md'
required: false
default: "false"
successful_deploy_labels:
description: 'A comma separated list of labels to add to the pull request when a deployment is successful. Example: "deployed,success"'
required: false
default: ""
successful_noop_labels:
description: 'A comma separated list of labels to add to the pull request when a noop deployment is successful. Example: "noop,success"'
required: false
default: ""
failed_deploy_labels:
description: 'A comma separated list of labels to add to the pull request when a deployment fails. Example: "failed,deploy-failed"'
required: false
default: ""
failed_noop_labels:
description: 'A comma separated list of labels to add to the pull request when a noop deployment fails. Example: "failed,noop-failed"'
required: false
default: ""
skip_successful_noop_labels_if_approved:
description: 'Whether or not the post run logic should skip adding successful noop labels if the pull request is approved. This can be useful if you add a label such as "ready-for-review" after a .noop completes but want to skip adding that label in situations where the pull request is already approved.'
required: false
default: "false"
skip_successful_deploy_labels_if_approved:
description: 'Whether or not the post run logic should skip adding successful deploy labels if the pull request is approved. This can be useful if you add a label such as "ready-for-review" after a .deploy completes but want to skip adding that label in situations where the pull request is already approved.'
required: false
default: "false"
enforced_deployment_order:
description: 'A comma separated list of environments that must be deployed in a specific order. Example: `"development,staging,production"`. If this is set then you cannot deploy to latter environments unless the former ones have a successful and active deployment on the latest commit first.'
required: false
default: ""
use_security_warnings:
description: 'Whether or not to leave security related warnings in log messages during deployments. Default is "true"'
required: false
default: "true"
allow_non_default_target_branch_deployments:
description: 'Whether or not to allow deployments of pull requests that target a branch other than the default branch (aka stable branch) as their merge target. By default, this Action would reject the deployment of a branch named "feature-branch" if it was targeting "foo" instead of "main" (or whatever your default branch is). This option allows you to override that behavior and be able to deploy any branch in your repository regardless of the target branch. This option is potentially unsafe and should be used with caution as most default branches contain branch protection rules. Often times non-default branches do not contain these same branch protection rules. Follow along in this issue thread to learn more https://github.com/github/branch-deploy/issues/340'
required: false
default: "false"
deployment_confirmation:
description: 'Whether or not to require an additional confirmation before a deployment can continue. Default is "false". If your project requires elevated security, it is highly recommended to enable this option - especially in open source projects where you might be deploying forks. docs/deployment-confirmation.md'
required: false
default: "false"
deployment_confirmation_timeout:
description: 'The number of seconds to wait for a deployment confirmation before timing out. Default is "60" seconds (1 minute).'
required: false
default: "60"
outputs:
continue:
description: 'The string "true" if the deployment should continue, otherwise empty - Use this to conditionally control if your deployment should proceed or not'
triggered:
description: 'The string "true" if the trigger was found, otherwise the string "false"'
comment_body:
description: The comment body
issue_number:
description: The issue number of the pull request (or issue) that was commented on
actor:
description: The GitHub handle of the actor that invoked the IssueOps command
environment:
description: The environment that has been selected for a deployment
params:
description: The raw parameters that were passed into the deployment command (see param_separator)
parsed_params:
description: A stringified JSON object of the parsed parameters that were passed into the deployment command
noop:
description: 'The string "true" if the noop trigger was found, otherwise the string "false" - Use this to conditionally control whether your deployment runs as a noop or not'
sha:
description: The sha of the branch to be deployed
default_branch_tree_sha:
description: The sha of the default branch tree (useful for subsequent workflow steps if they need to do commit comparisons)
base_ref:
description: The base ref that the pull request is merging into
ref:
description: The ref (branch or sha) to use with deployment
comment_id:
description: The comment id which triggered this deployment
type:
description: "The type of trigger that was detected (examples: deploy, lock, unlock, lock-info-alias, help)"
fork:
description: 'The string "true" if the pull request is a fork, otherwise "false"'
fork_ref:
description: 'The true ref of the fork'
fork_label:
description: 'The API label field returned for the fork'
fork_checkout:
description: 'The console command presented in the GitHub UI to checkout a given fork locally'
fork_full_name:
description: 'The full name of the fork in "org/repo" format'
deployment_id:
description: The ID of the deployment created by running this action
environment_url:
description: The environment URL detected and used for the deployment (sourced from the environment_urls input)
initial_reaction_id:
description: The reaction id for the initial reaction on the trigger comment
initial_comment_id:
description: The comment id for the "Deployment Triggered 🚀" comment created by the action
actor_handle:
description: The handle of the user who triggered the action
global_lock_claimed:
description: 'The string "true" if the global lock was claimed'
global_lock_released:
description: 'The string "true" if the global lock was released'
unlocked_environments:
description: 'Only exposed when using the "unlock on merge" mode - This output variable will contain a comma separated list of the environments that were unlocked'
sha_deployment:
description: 'If "allow_sha_deployments" is enabled, and a sha deployment is performed instead of a branch deployment, this output variable will contain the sha that was deployed. Otherwise, this output variable will be empty'
review_decision:
description: 'The pull request review status. Can be one of a few values - examples: APPROVED, REVIEW_REQUIRED, CHANGES_REQUESTED, skip_reviews, null'
is_outdated:
description: 'The string "true" if the branch is outdated, otherwise "false"'
merge_state_status:
description: 'The status of the merge state. Can be one of a few values - examples: "DIRTY", "DRAFT", "CLEAN", etc'
commit_status:
description: 'The status of the commit. Can be one of a few values - examples: "SUCCESS", null, "skip_ci", "PENDING", "FAILURE" etc'
approved_reviews_count:
description: 'The number of approved reviews on the pull request'
needs_to_be_deployed:
description: 'A comma separated list of environments that need successful and active deployments before the current environment (that was requested) can be deployed. This output is tied to the "enforced_deployment_order" input option.'
commit_verified:
description: 'The string "true" if the commit has a verified signature, otherwise "false"'
total_seconds:
description: 'The total number of seconds that the deployment took to complete (Integer)'
non_default_target_branch_used:
description: 'The string "true" if the pull request is targeting a branch other than the default branch (aka stable branch) for the merge, otherwise unset'
runs:
using: "node24"
main: "dist/index.js"
post: "dist/index.js"
Action ID: marketplace/amirisback/consumable-code-movie-tmdb-api
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/consumable-code-movie-tmdb-api
Retrofit has been Handled, Consumable code for request Public API (TMDb API)
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Consumable Code Movie TMDB API'
description: 'Retrofit has been Handled, Consumable code for request Public API (TMDb API)'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/islamic-javanese-calendar
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/islamic-javanese-calendar
SDK to make developing Android apps easier
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'FrogoKickStartAndroid'
description: 'SDK to make developing Android apps easier'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/consumable-code-news-api
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/consumable-code-news-api
Retrofit has been Handled !! || Consumable code for request API (News API)
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'consumable-code-news-api'
description: 'Retrofit has been Handled !! || Consumable code for request API (News API)'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/appleboy/setup-go
Author: GitHub
Publisher: appleboy
Repository: github.com/appleboy/setup-go
Setup a Go environment and add it to the PATH, additionally providing proxy support
| Name | Required | Description |
|---|---|---|
go-version |
Optional | The Go version to download (if necessary) and use. Example: 1.9.3 Default: 1.10 |
version |
Optional | Deprecated. Use go-version instead. Will not be supported after October 1, 2019 |
name: 'Setup Go environment'
description: 'Setup a Go environment and add it to the PATH, additionally providing proxy support'
author: 'GitHub'
inputs:
go-version:
description: 'The Go version to download (if necessary) and use. Example: 1.9.3'
default: '1.10'
# Deprecated option, do not use. Will not be supported after October 1, 2019
version:
description: 'Deprecated. Use go-version instead. Will not be supported after October 1, 2019'
deprecationMessage: 'The version property will not be supported after October 1, 2019. Use go-version instead'
runs:
using: 'node12'
main: 'lib/setup-go.js'
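A minimal usage sketch, assuming a released tag such as `appleboy/setup-go@v1` exists (the tag is an assumption; the `go-version` input matches the metadata above):
name: go-build example
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the requested Go toolchain and add it to PATH
      - uses: appleboy/setup-go@v1   # assumed tag; pin to a released version
        with:
          go-version: '1.10'
      - run: go version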
Action ID: marketplace/azure/LAMPStack-Azure-App-Service
Author: Unknown
Publisher: azure
Repository: github.com/azure/LAMPStack-Azure-App-Service
This action helps deploy LAMP stack to Azure App service
| Name | Required | Description |
|---|---|---|
client-id |
Required | Client id to login to azure |
tenant-id |
Required | Tenant id to login to azure |
subscription-id |
Required | Subscription id to be used with your az login |
resource-group-name |
Required | Resource group to deploy your resources to |
admin-username |
Required | Admin username to login to app |
admin-password |
Required | Admin password to login to app and mySql |
name: 'Deploy LAMP stack on Azure App service - Quickstart Template'
description: 'This action helps deploy LAMP stack to Azure App service'
branding:
icon: 'play-circle'
color: 'blue'
inputs:
client-id:
description: 'Client id to login to azure'
required: true
tenant-id:
description: 'Tenant id to login to azure'
required: true
subscription-id:
description: 'Subscription id to be used with your az login'
required: true
resource-group-name:
description: 'Resource group to deploy your resources to'
required: true
admin-username:
description: 'Admin username to login to app'
required: true
admin-password:
description: 'Admin password to login to app and mySql'
required: true
runs:
using: 'composite'
steps:
- name: 'Checkout master'
uses: actions/checkout@v3
- name: az cli login
uses: azure/login@v1
with:
client-id: ${{ inputs.client-id }}
tenant-id: ${{ inputs.tenant-id }}
subscription-id: ${{ inputs.subscription-id }}
enable-AzPSSession: true
- name: 'Az deploy - LAMP App'
uses: azure/arm-deploy@v1
with:
subscriptionId: ${{ inputs.subscription-id }}
resourceGroupName: ${{ inputs.resource-group-name }}
template: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/lamp-app/azuredeploy.json
parameters: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/lamp-app/azuredeploy.parameters.json
authenticationType=password
mySqlPassword=${{ inputs.admin-password }}
storageAccountNamePrefix=lampappsa
dnsLabelPrefix=dnslabelvm
vmSize=Standard_D2as_v4
adminUsername=${{ inputs.admin-username }}
adminPasswordOrKey=${{inputs.admin-password}}
failOnStdErr: false
- name: Fetch deployment record - Run Azure PowerShell inline script
uses: azure/powershell@v1
with:
inlineScript: |
Get-AzResourceGroupDeploymentOperation -ResourceGroupName ${{ inputs.resource-group-name }} -DeploymentName "azuredeploy"
azPSVersion: "latest"
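A usage sketch for the composite action above. Because it logs in with azure/login using a client/tenant/subscription id (OIDC), the calling workflow presumably needs `id-token: write`; the ref, secret names, and resource group name below are placeholders.
name: deploy-lamp example
on: [workflow_dispatch]
permissions:
  id-token: write   # assumed requirement for the OIDC login performed inside the composite action
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/LAMPStack-Azure-App-Service@main   # assumed ref; pin as appropriate
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}            # placeholder secret names
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          resource-group-name: my-lamp-rg                      # example resource group
          admin-username: ${{ secrets.LAMP_ADMIN_USERNAME }}
          admin-password: ${{ secrets.LAMP_ADMIN_PASSWORD }}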
Action ID: marketplace/Flow-Scanner/lightning-flow-scanner
Author: Ruben Halman
Publisher: Flow-Scanner
Repository: github.com/Flow-Scanner/lightning-flow-scanner
Scan Salesforce Flows for best practices, security, and performance issues.
| Name | Required | Description |
|---|---|---|
GITHUB_TOKEN |
Optional | GitHub token with `contents:read` and `security-events:write` permissions. Optional — defaults to the built-in `github.token` if not provided. Default: ${{ github.token }} |
threshold |
Optional | Minimum severity to fail the action in **table** mode (note, warning, error). **Ignored in `sarif` mode** — any result fails the action. Optional — no default. |
outputMode |
Optional | Output format:
- `sarif` (default): Uploads to GitHub Code Scanning. **Fails on any result**.
- `table`: Logs results in console. Fails only if severity >= `threshold`.
Default: sarif |
| Name | Description |
|---|---|
sarifPath |
Path to the generated SARIF file (only in `sarif` mode) |
scanResults |
Array of scan results in table format (only in `table` mode) |
name: Lightning Flow Scan
description: Scan Salesforce Flows for best practices, security, and performance issues.
branding:
icon: zap
color: purple
inputs:
GITHUB_TOKEN:
description: GitHub token with `contents:read` and `security-events:write` permissions. Optional — defaults to the built-in `github.token` if not provided.
required: false
default: ${{ github.token }}
threshold:
description: |
Minimum severity to fail the action in **table** mode (note, warning, error).
**Ignored in `sarif` mode** — any result fails the action.
Optional — no default.
required: false
outputMode:
description: |
Output format:
- `sarif` (default): Uploads to GitHub Code Scanning. **Fails on any result**.
- `table`: Logs results in console. Fails only if severity >= `threshold`.
required: false
default: sarif
outputs:
sarifPath:
description: Path to the generated SARIF file (only in `sarif` mode)
scanResults:
description: Array of scan results in table format (only in `table` mode)
runs:
using: node20
main: packages/action/dist/index.js
author: Ruben Halman
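A minimal usage sketch in the default `sarif` mode. The workflow permissions mirror the token scopes described above; the ref is an assumption.
name: flow-scan example
on: [pull_request]
permissions:
  contents: read          # matches the token scopes described in the GITHUB_TOKEN input
  security-events: write  # needed to upload SARIF results to code scanning
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Flow-Scanner/lightning-flow-scanner@main   # assumed ref; pin to a released version
        with:
          outputMode: sarif   # default; uploads results to GitHub Code Scanning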
Action ID: marketplace/docker/setup-compose-action
Author: docker
Publisher: docker
Repository: github.com/docker/setup-compose-action
Set up Docker Compose
| Name | Required | Description |
|---|---|---|
version |
Optional | Compose version. (eg. v2.32.4) |
cache-binary |
Optional | Cache compose binary to GitHub Actions cache backend Default: true |
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Docker Setup Compose'
description: 'Set up Docker Compose'
author: 'docker'
branding:
icon: 'anchor'
color: 'blue'
inputs:
version:
description: 'Compose version. (eg. v2.32.4)'
required: false
cache-binary:
description: 'Cache compose binary to GitHub Actions cache backend'
default: 'true'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
post: 'dist/index.js'
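A minimal usage sketch, assuming a released tag such as `docker/setup-compose-action@v1` (the tag is an assumption; the `version` input matches the metadata above):
name: compose example
on: [push]
jobs:
  compose:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install a specific Docker Compose version (omit `version` to use the action's default)
      - uses: docker/setup-compose-action@v1   # assumed tag; pin to a released version
        with:
          version: v2.32.4
      - run: docker compose version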
Action ID: marketplace/rhysd/codecov-action
Author: Ibrahim Ali <@ibrahim0814> | Codecov
Publisher: rhysd
Repository: github.com/rhysd/codecov-action
GitHub Action that uploads coverage reports for your repository to codecov.io
| Name | Required | Description |
|---|---|---|
name |
Optional | User defined upload name. Visible in Codecov UI |
token |
Optional | Repository upload token - get it from codecov.io. Required only for private repositories |
file |
Optional | Path to coverage file to upload |
flags |
Optional | Flag upload to group coverage metrics (e.g. unittests | integration | ui,chrome) |
fail_ci_if_error |
Optional | Specify whether or not CI build should fail if Codecov runs into an error during upload |
name: 'Codecov'
description: 'GitHub Action that uploads coverage reports for your repository to codecov.io'
author: 'Ibrahim Ali <@ibrahim0814> | Codecov'
inputs:
name:
description: 'User defined upload name. Visible in Codecov UI'
required: false
token:
description: 'Repository upload token - get it from codecov.io. Required only for private repositories'
required: false
file:
description: 'Path to coverage file to upload'
required: false
flags:
description: 'Flag upload to group coverage metrics (e.g. unittests | integration | ui,chrome)'
required: false
fail_ci_if_error:
description: 'Specify whether or not CI build should fail if Codecov runs into an error during upload'
required: false
branding:
color: 'red'
icon: 'umbrella'
runs:
using: 'node12'
main: 'dist/index.js'
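A minimal usage sketch for uploading a coverage report. The tag, test command, and coverage file path are placeholders; the inputs correspond to the metadata above.
name: coverage example
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-tests-with-coverage.sh   # placeholder: produce coverage.xml however your project does
      - uses: rhysd/codecov-action@v1               # assumed tag; pin to a released version
        with:
          token: ${{ secrets.CODECOV_TOKEN }}       # only required for private repositories
          file: ./coverage.xml
          flags: unittests
          fail_ci_if_error: true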
Action ID: marketplace/amirisback/kick-start-android
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/kick-start-android
SDK to make developing Android apps easier
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'FrogoKickStartAndroid'
description: 'SDK to make developing Android apps easier'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/aml-run
Author: azure/gh-aml
Publisher: azure
Repository: github.com/azure/aml-run
Submit a run to an Azure Machine Learning Workspace with this GitHub Action
| Name | Required | Description |
|---|---|---|
azure_credentials |
Required | Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS |
parameters_file |
Required | JSON file including the parameters of the run. Default: run.json |
| Name | Description |
|---|---|
experiment_name |
Name of the experiment of the run |
run_id |
ID of the run |
run_url |
URL to the run in the Azure Machine Learning Studio |
run_metrics |
Metrics of the run (will only be provided if wait_for_completion is set to True) |
run_metrics_markdown |
Metrics of the run formatted as markdown table (will only be provided if wait_for_completion is set to True) |
published_pipeline_id |
Id of the published pipeline (will only be provided if you submitted a pipeline and pipeline_publish is set to True) |
published_pipeline_status |
Status of the published pipeline (will only be provided if you submitted a pipeline and pipeline_publish is set to True) |
published_pipeline_endpoint |
Endpoint of the published pipeline (will only be provided if you submitted a pipeline and pipeline_publish is set to True) |
artifact_path |
Path of downloaded artifacts and logs from Azure Machine Learning (pipeline) run (will only be provided if wait_for_completion and download_artifacts is set to True) |
name: "Azure Machine Learning Run Action"
description: "Submit a run to an Azure Machine Learning Workspace with this GitHub Action"
author: "azure/gh-aml"
inputs:
azure_credentials:
description: "Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS"
required: true
parameters_file:
description: "JSON file including the parameters of the run."
required: true
default: "run.json"
outputs:
experiment_name:
description: "Name of the experiment of the run"
run_id:
description: "ID of the run"
run_url:
description: "URL to the run in the Azure Machine Learning Studio"
run_metrics:
description: "Metrics of the run (will only be provided if wait_for_completion is set to True)"
run_metrics_markdown:
description: "Metrics of the run formatted as markdown table (will only be provided if wait_for_completion is set to True)"
published_pipeline_id:
description: "Id of the publised pipeline (will only be provided if you submitted a pipeline and pipeline_publish is set to True)"
published_pipeline_status:
description: "Status of the publised pipeline (will only be provided if you submitted a pipeline and pipeline_publish is set to True)"
published_pipeline_endpoint:
description: "Endpoint of the publised pipeline (will only be provided if you submitted a pipeline and pipeline_publish is set to True)"
artifact_path:
description: "Path of downloaded artifacts and logs from Azure Machine Learning (pipeline) run (will only be provided if wait_for_completion and download_artifacts is set to True)"
branding:
icon: "chevron-up"
color: "blue"
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/tibdex/auto-update
Author: Thibault Derousseaux <tibdex@gmail.com>
Publisher: tibdex
Repository: github.com/tibdex/auto-update
Automatically keep pull requests with auto-merge enabled up to date with their base branch.
| Name | Required | Description |
|---|---|---|
github_token |
Optional | Token for the GitHub API. Default: ${{ github.token }} |
name: Auto-update
author: Thibault Derousseaux <tibdex@gmail.com>
description: Automatically keep pull requests with auto-merge enabled up to date with their base branch.
inputs:
github_token:
description: Token for the GitHub API.
default: ${{ github.token }}
runs:
using: node16
main: dist/index.js
branding:
icon: refresh-cw
color: blue
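A minimal usage sketch, triggered whenever the base branch moves. The trigger, permissions, and tag are assumptions; the action uses the built-in token by default.
name: auto-update example
on:
  push:
    branches: [main]   # update open pull requests when their base branch changes
permissions:
  contents: write       # assumption: needed to push updates into pull request branches
  pull-requests: read
jobs:
  auto-update:
    runs-on: ubuntu-latest
    steps:
      - uses: tibdex/auto-update@v2   # assumed tag; pin to a released version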
Action ID: marketplace/amirisback/android-modularization
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-modularization
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/coderabbitai/ai-pr-reviewer
Author: CodeRabbit LLC
Publisher: coderabbitai
Repository: github.com/coderabbitai/ai-pr-reviewer
AI-based PR Reviewer & Summarizer with Chat Capabilities
| Name | Required | Description |
|---|---|---|
debug |
Optional | Enable debug mode Default: false |
max_files |
Optional | Max files to summarize and review. Less than or equal to 0 means no limit. Default: 150 |
review_simple_changes |
Optional | Review even when the changes are simple Default: false |
review_comment_lgtm |
Optional | Leave comments even if the patch is LGTM Default: false |
path_filters |
Optional | The path filters, e.g., "src/**.py", "!dist/**", each line will be considered as one pattern.
See also
- https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpushpull_requestpull_request_targetpathspaths-ignore
- https://github.com/isaacs/minimatch
Default: !dist/**
!**/*.app
!**/*.bin
!**/*.bz2
!**/*.class
!**/*.db
!**/*.csv
!**/*.tsv
!**/*.dat
!**/*.dll
!**/*.dylib
!**/*.egg
!**/*.glif
!**/*.gz
!**/*.xz
!**/*.zip
!**/*.7z
!**/*.rar
!**/*.zst
!**/*.ico
!**/*.jar
!**/*.tar
!**/*.war
!**/*.lo
!**/*.log
!**/*.mp3
!**/*.wav
!**/*.wma
!**/*.mp4
!**/*.avi
!**/*.mkv
!**/*.wmv
!**/*.m4a
!**/*.m4v
!**/*.3gp
!**/*.3g2
!**/*.rm
!**/*.mov
!**/*.flv
!**/*.iso
!**/*.swf
!**/*.flac
!**/*.nar
!**/*.o
!**/*.ogg
!**/*.otf
!**/*.p
!**/*.pdf
!**/*.doc
!**/*.docx
!**/*.xls
!**/*.xlsx
!**/*.ppt
!**/*.pptx
!**/*.pkl
!**/*.pickle
!**/*.pyc
!**/*.pyd
!**/*.pyo
!**/*.pub
!**/*.pem
!**/*.rkt
!**/*.so
!**/*.ss
!**/*.eot
!**/*.exe
!**/*.pb.go
!**/*.lock
!**/*.ttf
!**/*.yaml
!**/*.yml
!**/*.cfg
!**/*.toml
!**/*.ini
!**/*.mod
!**/*.sum
!**/*.work
!**/*.json
!**/*.mmd
!**/*.svg
!**/*.jpeg
!**/*.jpg
!**/*.png
!**/*.gif
!**/*.bmp
!**/*.tiff
!**/*.webm
!**/*.woff
!**/*.woff2
!**/*.dot
!**/*.md5sum
!**/*.wasm
!**/*.snap
!**/*.parquet
!**/gen/**
!**/_gen/**
!**/generated/**
!**/@generated/**
!**/vendor/**
!**/*.min.js
!**/*.min.js.map
!**/*.min.js.css
!**/*.tfstate
!**/*.tfstate.backup
|
disable_review |
Optional | Only provide the summary and skip the code review. Default: false |
disable_release_notes |
Optional | Disable release notes Default: false |
openai_base_url |
Optional | The url of the openai api interface. Default: https://api.openai.com/v1 |
openai_light_model |
Optional | Model to use for simple tasks like summarizing diff on a file. Default: gpt-3.5-turbo |
openai_heavy_model |
Optional | Model to use for complex tasks such as code reviews. Default: gpt-4 |
openai_model_temperature |
Optional | Temperature for GPT model Default: 0.05 |
openai_retries |
Optional | How many times to retry OpenAI API in case of timeouts or errors? Default: 5 |
openai_timeout_ms |
Optional | Timeout for OpenAI API call in millis Default: 360000 |
openai_concurrency_limit |
Optional | How many concurrent API calls to make to OpenAI servers? Default: 6 |
github_concurrency_limit |
Optional | How many concurrent API calls to make to GitHub? Default: 6 |
system_message |
Optional | System message to be sent to OpenAI Default: You are `@coderabbitai` (aka `github-actions[bot]`), a language model
trained by OpenAI. Your purpose is to act as a highly experienced
software engineer and provide a thorough review of the code hunks
and suggest code snippets to improve key areas such as:
- Logic
- Security
- Performance
- Data races
- Consistency
- Error handling
- Maintainability
- Modularity
- Complexity
- Optimization
- Best practices: DRY, SOLID, KISS
Do not comment on minor code style issues, missing
comments/documentation. Identify and resolve significant
concerns to improve overall code quality while deliberately
disregarding minor issues.
|
summarize |
Optional | The prompt for final summarization response Default: Provide your final response in markdown with the following content:
- **Walkthrough**: A high-level summary of the overall change instead of
specific files within 80 words.
- **Changes**: A markdown table of files and their summaries. Group files
with similar changes together into a single row to save space.
- **Poem**: Below the changes, include a whimsical, short poem written by
a rabbit to celebrate the changes. Format the poem as a quote using
the ">" symbol and feel free to use emojis where relevant.
Avoid additional commentary as this summary will be added as a comment on the
GitHub pull request. Use the titles "Walkthrough" and "Changes" and they must be H2.
|
summarize_release_notes |
Optional | The prompt for generating release notes in the same chat as summarize stage Default: Craft concise release notes for the pull request.
Focus on the purpose and user impact, categorizing changes as "New Feature", "Bug Fix",
"Documentation", "Refactor", "Style", "Test", "Chore", or "Revert". Provide a bullet-point list,
e.g., "- New Feature: Added search functionality to the UI". Limit your response to 50-100 words
and emphasize features visible to the end-user while omitting code-level details.
|
language |
Optional | ISO code for the response language Default: en-US |
bot_icon |
Optional | The icon for the bot Default: <img src="https://avatars.githubusercontent.com/in/347564?s=41" alt="Image description" width="20" height="20"> |
name: 'AI-based PR Reviewer & Summarizer with Chat Capabilities'
description: 'AI-based PR Reviewer & Summarizer with Chat Capabilities'
branding:
icon: 'git-merge'
color: 'orange'
author: 'CodeRabbit LLC'
inputs:
debug:
required: false
description: 'Enable debug mode'
default: 'false'
max_files:
required: false
description:
'Max files to summarize and review. Less than or equal to 0 means no
limit.'
default: '150'
review_simple_changes:
required: false
description: 'Review even when the changes are simple'
default: 'false'
review_comment_lgtm:
required: false
description: 'Leave comments even if the patch is LGTM'
default: 'false'
path_filters:
required: false
description: |
The path filters, e.g., "src/**.py", "!dist/**", each line will be considered as one pattern.
See also
- https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpushpull_requestpull_request_targetpathspaths-ignore
- https://github.com/isaacs/minimatch
default: |
!dist/**
!**/*.app
!**/*.bin
!**/*.bz2
!**/*.class
!**/*.db
!**/*.csv
!**/*.tsv
!**/*.dat
!**/*.dll
!**/*.dylib
!**/*.egg
!**/*.glif
!**/*.gz
!**/*.xz
!**/*.zip
!**/*.7z
!**/*.rar
!**/*.zst
!**/*.ico
!**/*.jar
!**/*.tar
!**/*.war
!**/*.lo
!**/*.log
!**/*.mp3
!**/*.wav
!**/*.wma
!**/*.mp4
!**/*.avi
!**/*.mkv
!**/*.wmv
!**/*.m4a
!**/*.m4v
!**/*.3gp
!**/*.3g2
!**/*.rm
!**/*.mov
!**/*.flv
!**/*.iso
!**/*.swf
!**/*.flac
!**/*.nar
!**/*.o
!**/*.ogg
!**/*.otf
!**/*.p
!**/*.pdf
!**/*.doc
!**/*.docx
!**/*.xls
!**/*.xlsx
!**/*.ppt
!**/*.pptx
!**/*.pkl
!**/*.pickle
!**/*.pyc
!**/*.pyd
!**/*.pyo
!**/*.pub
!**/*.pem
!**/*.rkt
!**/*.so
!**/*.ss
!**/*.eot
!**/*.exe
!**/*.pb.go
!**/*.lock
!**/*.ttf
!**/*.yaml
!**/*.yml
!**/*.cfg
!**/*.toml
!**/*.ini
!**/*.mod
!**/*.sum
!**/*.work
!**/*.json
!**/*.mmd
!**/*.svg
!**/*.jpeg
!**/*.jpg
!**/*.png
!**/*.gif
!**/*.bmp
!**/*.tiff
!**/*.webm
!**/*.woff
!**/*.woff2
!**/*.dot
!**/*.md5sum
!**/*.wasm
!**/*.snap
!**/*.parquet
!**/gen/**
!**/_gen/**
!**/generated/**
!**/@generated/**
!**/vendor/**
!**/*.min.js
!**/*.min.js.map
!**/*.min.js.css
!**/*.tfstate
!**/*.tfstate.backup
disable_review:
required: false
description: 'Only provide the summary and skip the code review.'
default: 'false'
disable_release_notes:
required: false
description: 'Disable release notes'
default: 'false'
openai_base_url:
required: false
description: 'The url of the openai api interface.'
default: 'https://api.openai.com/v1'
openai_light_model:
required: false
description:
'Model to use for simple tasks like summarizing diff on a file.'
default: 'gpt-3.5-turbo'
openai_heavy_model:
required: false
description: 'Model to use for complex tasks such as code reviews.'
default: 'gpt-4'
openai_model_temperature:
required: false
description: 'Temperature for GPT model'
default: '0.05'
openai_retries:
required: false
description:
'How many times to retry OpenAI API in case of timeouts or errors?'
default: '5'
openai_timeout_ms:
required: false
description: 'Timeout for OpenAI API call in millis'
default: '360000'
openai_concurrency_limit:
required: false
description: 'How many concurrent API calls to make to OpenAI servers?'
default: '6'
github_concurrency_limit:
required: false
description: 'How many concurrent API calls to make to GitHub?'
default: '6'
system_message:
required: false
description: 'System message to be sent to OpenAI'
default: |
You are `@coderabbitai` (aka `github-actions[bot]`), a language model
trained by OpenAI. Your purpose is to act as a highly experienced
software engineer and provide a thorough review of the code hunks
and suggest code snippets to improve key areas such as:
- Logic
- Security
- Performance
- Data races
- Consistency
- Error handling
- Maintainability
- Modularity
- Complexity
- Optimization
- Best practices: DRY, SOLID, KISS
Do not comment on minor code style issues, missing
comments/documentation. Identify and resolve significant
concerns to improve overall code quality while deliberately
disregarding minor issues.
summarize:
required: false
description: 'The prompt for final summarization response'
default: |
Provide your final response in markdown with the following content:
- **Walkthrough**: A high-level summary of the overall change instead of
specific files within 80 words.
- **Changes**: A markdown table of files and their summaries. Group files
with similar changes together into a single row to save space.
- **Poem**: Below the changes, include a whimsical, short poem written by
a rabbit to celebrate the changes. Format the poem as a quote using
the ">" symbol and feel free to use emojis where relevant.
Avoid additional commentary as this summary will be added as a comment on the
GitHub pull request. Use the titles "Walkthrough" and "Changes" and they must be H2.
summarize_release_notes:
required: false
description:
'The prompt for generating release notes in the same chat as summarize
stage'
default: |
Craft concise release notes for the pull request.
Focus on the purpose and user impact, categorizing changes as "New Feature", "Bug Fix",
"Documentation", "Refactor", "Style", "Test", "Chore", or "Revert". Provide a bullet-point list,
e.g., "- New Feature: Added search functionality to the UI". Limit your response to 50-100 words
and emphasize features visible to the end-user while omitting code-level details.
language:
required: false
description: ISO code for the response language
default: en-US
bot_icon:
required: false
description: 'The icon for the bot'
default: '<img src="https://avatars.githubusercontent.com/in/347564?s=41" alt="Image description" width="20" height="20">'
runs:
using: 'node16'
main: 'dist/index.js'
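A minimal usage sketch for this reviewer; the `coderabbitai/ai-pr-reviewer` reference, the tag, and the `OPENAI_API_KEY` secret name are assumptions not stated in this entry, while the `with:` inputs come from the list above:
- uses: coderabbitai/ai-pr-reviewer@latest          # assumed action reference and tag
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}       # assumed: token for posting review comments
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}   # assumed secret name
  with:
    openai_light_model: gpt-3.5-turbo
    openai_heavy_model: gpt-4
    review_simple_changes: false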
Action ID: marketplace/aws-actions/amazon-ecs-render-task-definition
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/amazon-ecs-render-task-definition
Inserts a container image URI into a container definition in an Amazon ECS task definition JSON file, creating a new file.
| Name | Required | Description |
|---|---|---|
task-definition |
Optional | The path to the ECS task definition JSON file |
task-definition-arn |
Optional | The full Amazon Resource Name (ARN) of the task definition to be used |
task-definition-family |
Optional | The name of a family that the task definition is registered to |
task-definition-revision |
Optional | The revision of the task definition being used |
container-name |
Required | The name of the container or containers defined in the containerDefinitions section of the ECS task definition. If more than one container, add the names comma separated. |
image |
Required | The URI of the container image to insert into the ECS task definition |
environment-variables |
Optional | Variables to add to the container. Each variable is of the form KEY=value, you can specify multiple variables with multi-line YAML strings. |
log-configuration-log-driver |
Optional | Create/Override logDriver inside logConfiguration |
log-configuration-options |
Optional | Create/Override options inside logConfiguration. Each variable is of the form key=value, you can specify multiple variables with multi-line YAML strings. |
docker-labels |
Optional | Create/Override options inside dockerLabels. Each variable is key=value, you can specify multiple variables with multi-line YAML. |
command |
Optional | The command used by ECS to start the container image |
env-files |
Optional | S3 object arns to set env variables onto the container. You can specify multiple files with multi-line YAML strings. |
secrets |
Optional | Secrets to add to the container. Each secret is of the form KEY=valueFrom, where valueFrom is a secret arn. You can specify multiple secrets with multi-line YAML strings. |
| Name | Description |
|---|---|
task-definition |
The path to the rendered task definition file |
name: 'Amazon ECS "Render Task Definition" Action for GitHub Actions'
description: 'Inserts a container image URI into a container definition in an Amazon ECS task definition JSON file, creating a new file.'
branding:
icon: 'cloud'
color: 'orange'
inputs:
task-definition:
description: 'The path to the ECS task definition JSON file'
required: false
task-definition-arn:
description: 'The full Amazon Resource Name (ARN) of the task definition to be used'
required: false
task-definition-family:
description: 'The name of a family that the task definition is registered to'
required: false
task-definition-revision:
description: 'The revision of the task definition being used'
required: false
container-name:
description: 'The name of the container or containers defined in the containerDefinitions section of the ECS task definition. If more than one container, add the names comma separated.'
required: true
image:
description: 'The URI of the container image to insert into the ECS task definition'
required: true
environment-variables:
description: 'Variables to add to the container. Each variable is of the form KEY=value, you can specify multiple variables with multi-line YAML strings.'
required: false
log-configuration-log-driver:
description: "Create/Override logDriver inside logConfiguration"
required: false
log-configuration-options:
description: "Create/Override options inside logConfiguration. Each variable is of the form key=value, you can specify multiple variables with multi-line YAML strings."
required: false
docker-labels:
description: "Create/Override options inside dockerLabels. Each variable is key=value, you can specify multiple variables with multi-line YAML."
required: false
command:
description: 'The command used by ECS to start the container image'
required: false
env-files:
description: 'S3 object arns to set env variables onto the container. You can specify multiple files with multi-line YAML strings.'
required: false
secrets:
description: 'Secrets to add to the container. Each secret is of the form KEY=valueFrom, where valueFrom is a secret arn. You can specify multiple secrets with multi-line YAML strings.'
required: false
outputs:
task-definition:
description: 'The path to the rendered task definition file'
runs:
using: 'node20'
main: 'dist/index.js'
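A minimal usage sketch wiring the required `container-name` and `image` inputs to an example task definition file; the `@v1` tag and all values are illustrative, not taken from this entry:
- name: Render Amazon ECS task definition
  id: render
  uses: aws-actions/amazon-ecs-render-task-definition@v1   # tag is an assumption
  with:
    task-definition: task-definition.json                  # example file in the repository
    container-name: web                                    # example container name
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:${{ github.sha }}   # example image URI
# The rendered file is exposed as ${{ steps.render.outputs.task-definition }}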
Action ID: marketplace/axel-op/package-java-google-cloud-function
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/package-java-google-cloud-function
Package a Google Cloud Function in Java
| Name | Required | Description |
|---|---|---|
working-directory |
Optional | The root directory of the function Default: . |
| Name | Description |
|---|---|
deployment-directory |
The directory containing the files to be deployed |
deployment-file |
The zip file to be deployed |
name: "Package a Google Cloud Function in Java"
description: "Package a Google Cloud Function in Java"
branding:
icon: package
color: red
inputs:
working-directory:
description: "The root directory of the function"
required: false
default: "."
outputs:
deployment-directory:
description: "The directory containing the files to be deployed"
value: ${{ steps.package.outputs.deployment-directory }}
deployment-file:
description: "The zip file to be deployed"
value: ${{ steps.package.outputs.deployment-file }}
runs:
using: "composite"
steps:
- name: Package
id: package
working-directory: ${{ inputs.working-directory }}
run: "${GITHUB_ACTION_PATH}/package.sh"
shell: bash
env:
DEPLOYMENT_DIR: deployment
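A minimal usage sketch; the tag and the `working-directory` value are example assumptions, and the outputs referenced are the ones listed above:
- id: package
  uses: axel-op/package-java-google-cloud-function@v1   # tag is an assumption
  with:
    working-directory: my-function                       # example directory
- run: echo "Deploy ${{ steps.package.outputs.deployment-file }}"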
Action ID: marketplace/actions/setup-haskell
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-haskell
Set up a specific version of GHC and Cabal and add the command-line tools to the PATH
| Name | Required | Description |
|---|---|---|
ghc-version |
Optional | Version of GHC to use. If set to "latest", it will always get the latest stable version. Default: latest |
cabal-version |
Optional | Version of Cabal to use. If set to "latest", it will always get the latest stable version. Default: latest |
stack-version |
Optional | Version of Stack to use. If set to "latest", it will always get the latest stable version. Default: latest |
enable-stack |
Optional | If specified, will setup Stack |
stack-no-global |
Optional | If specified, enable-stack must be set. Prevents installing GHC and Cabal globally |
stack-setup-ghc |
Optional | If specified, enable-stack must be set. Will run stack setup to install the specified GHC |
| Name | Description |
|---|---|
ghc-path |
The path of the ghc executable _directory_ |
cabal-path |
The path of the cabal executable _directory_ |
stack-path |
The path of the stack executable _directory_ |
cabal-store |
The path to the cabal store |
ghc-exe |
The path of the ghc _executable_ |
cabal-exe |
The path of the cabal _executable_ |
stack-exe |
The path of the stack _executable_ |
name: 'Setup Haskell'
description: 'Set up a specific version of GHC and Cabal and add the command-line tools to the PATH'
author: 'GitHub'
inputs:
ghc-version:
required: false
description: 'Version of GHC to use. If set to "latest", it will always get the latest stable version.'
default: 'latest'
cabal-version:
required: false
description: 'Version of Cabal to use. If set to "latest", it will always get the latest stable version.'
default: 'latest'
stack-version:
required: false
description: 'Version of Stack to use. If set to "latest", it will always get the latest stable version.'
default: 'latest'
enable-stack:
required: false
description: 'If specified, will setup Stack'
stack-no-global:
required: false
description: 'If specified, enable-stack must be set. Prevents installing GHC and Cabal globally'
stack-setup-ghc:
required: false
description: 'If specified, enable-stack must be set. Will run stack setup to install the specified GHC'
outputs:
ghc-path:
description: 'The path of the ghc executable _directory_'
cabal-path:
description: 'The path of the cabal executable _directory_'
stack-path:
description: 'The path of the stack executable _directory_'
cabal-store:
description: 'The path to the cabal store'
ghc-exe:
description: 'The path of the ghc _executable_'
cabal-exe:
description: 'The path of the cabal _executable_'
stack-exe:
description: 'The path of the stack _executable_'
runs:
using: 'node12'
main: 'dist/index.js'
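A minimal usage sketch, assuming a `@v1` tag and an example GHC version; the inputs and outputs are those documented above:
- id: haskell
  uses: actions/setup-haskell@v1          # tag is an assumption
  with:
    ghc-version: '8.10.7'                 # example version
    cabal-version: 'latest'
- run: echo "ghc is at ${{ steps.haskell.outputs.ghc-exe }}"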
Action ID: marketplace/actions-hub/gcloud
Author: Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>
Publisher: actions-hub
Repository: github.com/actions-hub/gcloud
GitHub Action with all the components of the Google Cloud SDK
| Name | Required | Description |
|---|---|---|
args |
Required | Arguments for the CLI command |
cli |
Optional | cli command to run (gcloud, gsutil and kubectl available) Default: gcloud |
name: 'Google Cloud Platform (GCP) CLI - gcloud'
description: 'GitHub Action with all the components of the Google Cloud SDK'
author: 'Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>'
branding:
icon: 'cloud'
color: 'blue'
inputs:
args:
description: 'Arguments for the CLI command'
required: true
cli:
description: 'cli command to run (gcloud, gsutil and kubectl available)'
default: 'gcloud'
required: false
runs:
using: 'docker'
image: 'Dockerfile'
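A minimal usage sketch; the `@master` ref and the CLI arguments are illustrative, and authentication setup is outside the scope of this entry:
- uses: actions-hub/gcloud@master          # ref is an assumption
  with:
    cli: gsutil                            # gcloud, gsutil, or kubectl
    args: cp -r ./public gs://my-bucket    # example arguments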
Action ID: marketplace/aws-actions/terraform-aws-iam-policy-validator
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/terraform-aws-iam-policy-validator
Validate IAM Policies in TF templates using ValidatePolicy, CheckAccessNotGranted & CheckNoNewAccess API in Access Analyzer
| Name | Required | Description |
|---|---|---|
policy-check-type |
Required | Type of the policy check. Valid values: VALIDATE_POLICY, CHECK_NO_NEW_ACCESS, CHECK_ACCESS_NOT_GRANTED |
template-path |
Required | The path to the Terraform plan file (JSON). |
region |
Required | The destination region the resources will be deployed to. |
ignore-finding |
Optional | Allow validation failures to be ignored. Specify as a comma separated list of findings to be ignored. Can be individual finding codes (e.g. "PASS_ROLE_WITH_STAR_IN_RESOURCE"), a specific resource name (e.g. "MyResource"), or a combination of both separated by a period.(e.g. "MyResource.PASS_ROLE_WITH_STAR_IN_RESOURCE"). Names of finding codes may change in IAM Access Analyzer over time. Valid options: FINDING_CODE,RESOURCE_NAME,RESOURCE_NAME.FINDING_CODE |
actions |
Optional | List of comma-separated actions. Example format - ACTION,ACTION,ACTION. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED". At least one of "actions" or "resources" must be specified. |
resources |
Optional | List of comma-separated resource ARNs. Example format - RESOURCE,RESOURCE,RESOURCE. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED" At least one of "actions" or "resources" must be specified. |
reference-policy |
Optional | A JSON formatted file that specifies the path to the reference policy that is used for a permissions comparison. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS" |
reference-policy-type |
Optional | The policy type associated with the IAM policy under analysis and the reference policy. Valid values: IDENTITY, RESOURCE. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS" |
treat-finding-type-as-blocking |
Optional | Specify which finding types should be treated as blocking. Other finding types are treated as non blocking. If the tool detects any blocking finding types, it will exit with a non-zero exit code. If all findings are non blocking or there are no findings, the tool exits with an exit code of 0. Defaults to "ERROR" and "SECURITY_WARNING". Specify as a comma separated list of finding types that should be blocking. Pass "NONE" to ignore all findings. This attribute is considered only when policy-check-type is "VALIDATE_POLICY" |
treat-findings-as-non-blocking |
Optional | When not specified, if the tool detects any findings, it exits with a non-zero exit code. When specified, the tool exits with an exit code of 0. This attribute is considered only when policy-check-type is "CHECK_NO_NEW_ACCESS" or "CHECK_ACCESS_NOT_GRANTED" Default: False |
allow-external-principals |
Optional | A comma separated list of external principals that should be ignored. Specify as a comma separated list of a 12 digit AWS account ID, a federated web identity user, a federated SAML user, or an ARN. Specify \"*\" to allow anonymous access. (e.g. 123456789123,arn:aws:iam::111111111111:role/MyOtherRole,graph.facebook.com). Valid options: ACCOUNT,ARN". This attribute is considered only when policy-check-type is "VALIDATE_POLICY" |
allow-dynamic-ref-without-version |
Optional | Override the default behavior and allow dynamic SSM references without version numbers. The version number ensures that the SSM parameter value that was validated is the one that is deployed. |
exclude-resource-types |
Optional | List of comma-separated resource types. Resource types should be the same as Cloudformation template resource names such as AWS::IAM::Role, AWS::S3::Bucket. Valid option syntax: AWS::SERVICE::RESOURCE |
| Name | Description |
|---|---|
result |
Result of the policy checks |
name: '"Policy checks to validate AWS IAM policies in Terraform templates" Action For GitHub Actions'
description: "Validate IAM Policies in TF templates using ValidatePolicy, CheckAccessNotGranted & CheckNoNewAccess API in Access Analyzer"
branding:
icon: "cloud"
color: "orange"
inputs:
policy-check-type:
description: "Type of the policy check. Valid values: VALIDATE_POLICY, CHECK_NO_NEW_ACCESS, CHECK_ACCESS_NOT_GRANTED"
required: true
template-path:
description: "The path to the Terraform plan file (JSON)."
required: true
region:
description: "The destination region the resources will be deployed to."
required: true
ignore-finding:
description: 'Allow validation failures to be ignored. Specify as a comma separated list of findings to be ignored. Can be individual finding codes (e.g. "PASS_ROLE_WITH_STAR_IN_RESOURCE"), a specific resource name (e.g. "MyResource"), or a combination of both separated by a period.(e.g. "MyResource.PASS_ROLE_WITH_STAR_IN_RESOURCE"). Names of finding codes may change in IAM Access Analyzer over time. Valid options: FINDING_CODE,RESOURCE_NAME,RESOURCE_NAME.FINDING_CODE'
actions:
description: 'List of comma-separated actions. Example format - ACTION,ACTION,ACTION. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED". At least one of "actions" or "resources" must be specified.'
resources:
description: 'List of comma-separated resource ARNs. Example format - RESOURCE,RESOURCE,RESOURCE. This attribute is considered when policy-check-type is "CHECK_ACCESS_NOT_GRANTED" At least one of "actions" or "resources" must be specified.'
reference-policy:
description: 'A JSON formatted file that specifies the path to the reference policy that is used for a permissions comparison. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS"'
reference-policy-type:
description: 'The policy type associated with the IAM policy under analysis and the reference policy. Valid values: IDENTITY, RESOURCE. This attribute is considered and required when policy-check-type is "CHECK_NO_NEW_ACCESS"'
treat-finding-type-as-blocking:
description: 'Specify which finding types should be treated as blocking. Other finding types are treated as non blocking. If the tool detects any blocking finding types, it will exit with a non-zero exit code. If all findings are non blocking or there are no findings, the tool exits with an exit code of 0. Defaults to "ERROR" and "SECURITY_WARNING". Specify as a comma separated list of finding types that should be blocking. Pass "NONE" to ignore all findings. This attribute is considered only when policy-check-type is "VALIDATE_POLICY"'
treat-findings-as-non-blocking:
description: 'When not specified, if the tool detects any findings, it exits with a non-zero exit code. When specified, the tool exits with an exit code of 0. This attribute is considered only when policy-check-type is "CHECK_NO_NEW_ACCESS" or "CHECK_ACCESS_NOT_GRANTED"'
default: "False"
allow-external-principals:
description: 'A comma separated list of external principals that should be ignored. Specify as a comma separated list of a 12 digit AWS account ID, a federated web identity user, a federated SAML user, or an ARN. Specify \"*\" to allow anonymous access. (e.g. 123456789123,arn:aws:iam::111111111111:role/MyOtherRole,graph.facebook.com). Valid options: ACCOUNT,ARN". This attribute is considered only when policy-check-type is "VALIDATE_POLICY"'
allow-dynamic-ref-without-version:
description: "Override the default behavior and allow dynamic SSM references without version numbers. The version number ensures that the SSM parameter value that was validated is the one that is deployed."
exclude-resource-types:
description: "List of comma-separated resource types. Resource types should be the same as Cloudformation template resource names such as AWS::IAM::Role, AWS::S3::Bucket. Valid option syntax: AWS::SERVICE::RESOURCE"
outputs:
result:
description: "Result of the policy checks"
runs:
using: "docker"
image: Dockerfile
args:
- ${{ inputs.policy-check-type}}
- ${{ inputs.template-path }}
- ${{ inputs.region }}
- ${{ inputs.ignore-finding }}
- ${{ inputs.actions }}
- ${{ inputs.reference-policy }}
- ${{ inputs.reference-policy-type }}
- ${{ inputs.treat-finding-type-as-blocking }}
- ${{ inputs.treat-findings-as-non-blocking }}
- ${{ inputs.allow-external-principals }}
- ${{ inputs.allow-dynamic-ref-without-version }}
- ${{ inputs.exclude-resource-types }}
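A minimal usage sketch for a VALIDATE_POLICY check; the tag, plan path, and region are example values, and AWS credentials are assumed to be configured in an earlier step:
# AWS credentials are assumed to be configured beforehand (e.g. via aws-actions/configure-aws-credentials)
- uses: aws-actions/terraform-aws-iam-policy-validator@v1   # tag is an assumption
  with:
    policy-check-type: VALIDATE_POLICY
    template-path: tfplan.json            # example Terraform plan in JSON form
    region: us-east-1                     # example region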
Action ID: marketplace/srt32/uptime
Author: Unknown
Publisher: srt32
Repository: github.com/srt32/uptime
Ping a URL and check HTTP status code
| Name | Required | Description |
|---|---|---|
url-to-hit |
Required | which url to hit |
expected-statuses |
Optional | which http response statuses are expected Default: 200 |
user-agent-to-use |
Optional | User agent to use while sending the request |
| Name | Description |
|---|---|
status |
The http status we got |
name: 'Uptime Action'
description: 'Ping a URL and check HTTP status code'
inputs:
url-to-hit:
description: 'which url to hit'
required: true
expected-statuses:
description: 'which http response statuses are expected'
required: false
default: "200"
user-agent-to-use:
description: 'User agent to use while sending the request'
required: false
default: ''
outputs:
status: # id of output
description: 'The http status we got'
runs:
using: 'node12'
main: 'index.js'
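A minimal usage sketch; the `@master` ref and the URL are example values, and the `status` output is read back in a follow-up step:
- id: uptime
  uses: srt32/uptime@master               # ref is an assumption
  with:
    url-to-hit: https://example.com       # example URL
    expected-statuses: "200"
- run: echo "Got HTTP ${{ steps.uptime.outputs.status }}"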
Action ID: marketplace/peter-evans/create-or-update-project-card
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/create-or-update-project-card
A GitHub action to create or update a project card
| Name | Required | Description |
|---|---|---|
token |
Optional | GITHUB_TOKEN or a repo scoped PAT Default: ${{ github.token }} |
project-location |
Optional | The location of the project. Either a repository, organization, or user.
Default: ${{ github.repository }} |
project-number |
Optional | The number of the project. Semi-required. Either `project-number` OR `project-name` must be supplied. |
project-name |
Optional | The name of the project. Semi-required. Either `project-number` OR `project-name` must be supplied. Note that a project's name is not unique. The action will use the first matching project found. |
column-name |
Required | The name of the column to add a card to, or move an existing card to. |
repository |
Optional | The GitHub repository containing the issue or pull request. Default: ${{ github.repository }} |
issue-number |
Optional | The issue or pull request number to associate with the card. Default: ${{ github.event.issue.number }} |
| Name | Description |
|---|---|
card-id |
The ID of the card. |
name: 'Create or Update Project Card'
description: 'A GitHub action to create or update a project card'
inputs:
token:
description: 'GITHUB_TOKEN or a repo scoped PAT'
default: ${{ github.token }}
project-location:
description: >
The location of the project. Either a repository, organization, or user.
default: ${{ github.repository }}
project-number:
description: >
The number of the project.
Semi-required. Either `project-number` OR `project-name` must be supplied.
project-name:
description: >
The name of the project.
Semi-required. Either `project-number` OR `project-name` must be supplied.
Note that a project's name is not unique. The action will use the first matching project found.
column-name:
description: 'The name of the column to add a card to, or move an existing card to.'
required: true
repository:
description: 'The GitHub repository containing the issue or pull request.'
default: ${{ github.repository }}
issue-number:
description: 'The issue or pull request number to associate with the card.'
default: ${{ github.event.issue.number }}
outputs:
card-id:
description: 'The ID of the card.'
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'file-plus'
color: 'gray-dark'
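A minimal usage sketch; the `project-name` and `column-name` values are illustrative, the remaining inputs fall back to the defaults listed above, and the tag is an assumption:
- uses: peter-evans/create-or-update-project-card@v3   # tag is an assumption
  with:
    project-name: Backlog        # example; alternatively supply project-number
    column-name: Triage          # example column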
Action ID: marketplace/primer/vercel-action
Author: Unknown
Publisher: primer
Repository: github.com/primer/vercel-action
This action makes a deployment with GitHub Actions instead of the Vercel builder.
| Name | Required | Description |
|---|---|---|
vercel-token |
Required | Vercel token |
vercel-args |
Optional | |
github-comment |
Optional | if you want to comment on pr and commit, set true, default: true Default: true |
github-token |
Optional | if you want to comment on pr and commit, set token |
github-deployment |
Optional | if you want to create github deployment, set true, default: false Default: false |
working-directory |
Optional | the working directory |
vercel-project-id |
Optional | Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F) |
vercel-org-id |
Optional | Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F) |
vercel-project-name |
Optional | The name of the project; if absent we'll use the `vercel inspect` command to determine it. |
scope |
Optional | If you work in a team scope, you should set this value to your team id. |
zeit-token |
Required | zeit.co token |
now-args |
Optional | |
now-project-id |
Optional | Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F) |
now-org-id |
Optional | Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F) |
alias-domains |
Optional | You can assign a domain to this deployment. Please note that this domain must have been configured in the project. You can use pull request number via `{{PR_NUMBER}}` and branch via `{{BRANCH}}` |
| Name | Description |
|---|---|
preview-url |
deployment preview URL |
preview-name |
deployment preview name |
name: 'Vercel Action'
description: 'This action makes a deployment with GitHub Actions instead of the Vercel builder.'
inputs:
vercel-token:
description: 'Vercel token'
required: true
vercel-args:
description: ''
required: false
default: ''
github-comment:
required: false
default: 'true'
description: 'if you want to comment on pr and commit, set true, default: true'
github-token:
required: false
description: 'if you want to comment on pr and commit, set token'
github-deployment:
required: false
default: 'false'
description: 'if you want to create github deployment, set true, default: false'
working-directory:
required: false
description: the working directory
vercel-project-id:
required: false
description: 'Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F)'
vercel-org-id:
required: false
description: 'Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F)'
vercel-project-name:
required: false
description: "The name of the project; if absent we'll use the `vercel inspect` command to determine."
scope:
required: false
description: 'If you work in a team scope, you should set this value to your team id.'
zeit-token:
description: 'zeit.co token'
required: true
deprecationMessage: 'Zeit is now Vercel. Use "vercel-token" instead'
now-args:
description: ''
required: false
default: ''
deprecationMessage: 'Zeit is now Vercel. Use "vercel-args" instead.'
now-project-id:
required: false
description: 'Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F)'
deprecationMessage: 'Zeit is now Vercel. Use "vercel-project-id" instead.'
now-org-id:
required: false
description: 'Vercel CLI 17+, ❗️ The `name` property in vercel.json is deprecated (https://zeit.ink/5F)'
deprecationMessage: 'Zeit is now Vercel. Use "vercel-org-id" instead.'
alias-domains:
required: false
description: 'You can assign a domain to this deployment. Please note that this domain must have been configured in the project. You can use pull request number via `{{PR_NUMBER}}` and branch via `{{BRANCH}}`'
default: ''
outputs:
preview-url:
description: 'deployment preview URL'
preview-name:
description: 'deployment preview name'
runs:
using: 'node12'
main: 'dist/index.js'
branding:
icon: 'triangle'
color: 'white'
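A minimal usage sketch using the non-deprecated inputs; the tag and the secret names are assumptions:
- uses: primer/vercel-action@v1                          # tag is an assumption
  with:
    vercel-token: ${{ secrets.VERCEL_TOKEN }}            # assumed secret name
    vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}          # assumed secret name
    vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID }}  # assumed secret name
    github-token: ${{ secrets.GITHUB_TOKEN }}
    vercel-args: '--prod'                                # example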
Action ID: marketplace/julia-actions/julia-docdeploy
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-docdeploy
Build and deploy the documentation for a Julia package.
| Name | Required | Description |
|---|---|---|
prefix |
Optional | Value inserted in front of the julia command, e.g. for running xvfb-run julia [...] |
install-package |
Optional | Whether or not to install the package with `Pkg.develop` into the `docs` environment Default: True |
project |
Optional | Directory that contains the folder `docs` to deploy from Default: . |
name: 'Deploy package documentation'
description: 'Build and deploy the documentation for a Julia package.'
author: 'David Anthoff'
branding:
icon: 'upload'
color: 'gray-dark'
inputs:
prefix:
description: 'Value inserted in front of the julia command, e.g. for running xvfb-run julia [...]'
default: ''
required: false
install-package:
description: 'Whether or not to install the package with `Pkg.develop` into the `docs` environment'
default: true
required: false
project:
description: 'Directory that contains the folder `docs` to deploy from'
default: '.'
required: false
runs:
using: 'composite'
steps:
- name: Install GitHubActions.jl in its own (shared) environment
run: |
using Pkg
Pkg.activate("docs-logger-env"; shared=true)
Pkg.add(Pkg.PackageSpec(name="GitHubActions", version="0.1"))
shell: julia --color=yes {0}
working-directory: ${{inputs.project}}
- name: Install the current package into the `docs` environment
run: |
# The Julia command that will be executed
julia_cmd=( julia --color=yes --code-coverage --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()' )
# Add the prefix in front of the command if there is one
prefix=( ${{ inputs.prefix }} )
[[ ${#prefix[@]} -gt 0 ]] && julia_cmd=( "${prefix[@]}" "${julia_cmd[@]}" )
# Run the Julia command
"${julia_cmd[@]}"
shell: bash
working-directory: ${{inputs.project}}
if: ${{ inputs.install-package == 'true'}}
- name: Build the documentation
run: |
# The Julia command that will be executed
julia_cmd=( julia --color=yes --project=docs/ -e '
@eval Module() begin
push!(LOAD_PATH, "@docs-logger-env") # access GitHubActions.jl
import Logging, GitHubActions
Logging.global_logger(GitHubActions.GitHubActionsLogger())
pop!(LOAD_PATH)
end
include("docs/make.jl")' )
# Add the prefix in front of the command if there is one
prefix="${{ inputs.prefix }}"
[[ -n $prefix ]] && julia_cmd=( "$prefix" "${julia_cmd[@]}" )
# Run the Julia command
"${julia_cmd[@]}"
shell: bash
working-directory: ${{inputs.project}}
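A minimal usage sketch; the tags are assumptions, Julia is assumed to be installed by a preceding setup step, and the environment variables are ones conventionally read by Documenter.jl rather than inputs of this action:
- uses: julia-actions/setup-julia@v1        # assumed companion step to put julia on PATH
- uses: julia-actions/julia-docdeploy@v1    # tag is an assumption
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}       # assumed: read by Documenter.jl for deployment
    DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}   # assumed secret name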
Action ID: marketplace/azure/WordPress-Azure-App-Service
Author: Unknown
Publisher: azure
Repository: github.com/azure/WordPress-Azure-App-Service
This action helps create an Azure App service and deploy Wordpress on it
| Name | Required | Description |
|---|---|---|
client-id |
Required | Client id to login to azure |
tenant-id |
Required | Tenant id to login to azure |
subscription-id |
Required | Subscription id to be used with your az login |
resource-group-name |
Required | Resource group to deploy your resources to |
name: 'Deploy a Wordpress on Azure App service'
description: 'This action helps create an Azure App service and deploy Wordpress on it'
branding:
icon: 'play-circle'
color: 'blue'
inputs:
client-id:
description: 'Client id to login to azure'
required: true
tenant-id:
description: 'Tenant id to login to azure'
required: true
subscription-id:
description: 'Subscription id to be used with your az login'
required: true
resource-group-name:
description: 'Resource group to deploy your resources to'
required: true
runs:
using: 'composite'
steps:
- name: 'Checkout master'
uses: actions/checkout@v3
- name: 'az cli login'
uses: azure/login@v1
with:
client-id: ${{ inputs.client-id }}
tenant-id: ${{ inputs.tenant-id }}
subscription-id: ${{ inputs.subscription-id }}
enable-AzPSSession: false
- name: 'Az deploy - wordpress on app svc'
uses: azure/arm-deploy@v1
with:
subscriptionId: ${{ inputs.subscription-id }}
resourceGroupName: ${{ inputs.resource-group-name }}
template: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/b41b420ccb55ff45a032b83b2f85e65d6fa16aae/application-workloads/wordpress/wordpress-app-service-mysql-inapp/azuredeploy.json
parameters: https://raw.githubusercontent.com/Azure/azure-quickstart-templates/b41b420ccb55ff45a032b83b2f85e65d6fa16aae/application-workloads/wordpress/wordpress-app-service-mysql-inapp/azuredeploy.parameters.json
failOnStdErr: false
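A minimal usage sketch; the tag, secret names, and resource group are example values:
# The job typically also needs `permissions: id-token: write` for the azure/login OIDC sign-in used internally (assumption)
- uses: azure/WordPress-Azure-App-Service@v1   # tag is an assumption
  with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}             # example secret names
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
    resource-group-name: wordpress-rg                     # example resource group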
Action ID: marketplace/MansaGroup/gcs-cache-action
Author: MansaGroup
Publisher: MansaGroup
Repository: github.com/MansaGroup/gcs-cache-action
Cache your workload to a Google Cloud Storage bucket
| Name | Required | Description |
|---|---|---|
bucket |
Required | Name of the bucket to store the cache into |
path |
Required | Paths to store |
key |
Required | Key to use as cache identifier |
restore-keys |
Optional | Substitution keys to use when cache miss |
| Name | Description |
|---|---|
cache-hit |
Whether the cache was successfully restored |
name: 'Google Cloud Storage Cache'
author: 'MansaGroup'
description: 'Cache your workload to a Google Cloud Storage bucket'
inputs:
bucket:
description: Name of the bucket to store the cache into
required: true
path:
description: Paths to store
required: true
key:
description: Key to use as cache identifier
required: true
restore-keys:
description: Substitution keys to use when cache miss
default: ''
required: false
outputs:
cache-hit:
description: Whether the cache was successfully restored
runs:
using: 'node16'
main: 'dist/main/index.js'
post: 'dist/post/index.js'
post-if: 'success()'
branding:
icon: 'hard-drive'
color: 'blue'
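A minimal usage sketch; the tag, bucket name, and key expression are example values, and Google Cloud credentials are assumed to be configured in an earlier step:
# Google Cloud auth is assumed to be set up beforehand (e.g. via google-github-actions/auth)
- uses: MansaGroup/gcs-cache-action@v2        # tag is an assumption
  with:
    bucket: my-ci-cache                       # example bucket
    path: node_modules
    key: node-${{ hashFiles('package-lock.json') }}   # example key
    restore-keys: node-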
Action ID: marketplace/amannn/action-semantic-pull-request
Author: Jan Amann <jan@amann.work>
Publisher: amannn
Repository: github.com/amannn/action-semantic-pull-request
Ensure your PR title matches the Conventional Commits spec (https://www.conventionalcommits.org/).
| Name | Required | Description |
|---|---|---|
types |
Optional | Provide custom types (newline delimited) if you don't want the default ones from https://www.conventionalcommits.org. These are regex patterns auto-wrapped in `^ $`. |
scopes |
Optional | Configure which scopes are allowed (newline delimited). These are regex patterns auto-wrapped in `^ $`. |
requireScope |
Optional | Configure that a scope must always be provided. |
disallowScopes |
Optional | Configure which scopes are disallowed in PR titles (newline delimited). These are regex patterns auto-wrapped in `^ $`. |
subjectPattern |
Optional | Configure additional validation for the subject based on a regex. E.g. '^(?![A-Z]).+$' ensures the subject doesn't start with an uppercase character. |
subjectPatternError |
Optional | If `subjectPattern` is configured, you can use this property to override the default error message that is shown when the pattern doesn't match. The variables `subject` and `title` can be used within the message. |
validateSingleCommit |
Optional | When using "Squash and merge" on a PR with only one commit, GitHub will suggest using that commit message instead of the PR title for the merge commit, and it's easy to commit this by mistake. Enable this option to also validate the commit message for one commit PRs. |
validateSingleCommitMatchesPrTitle |
Optional | Related to `validateSingleCommit` you can opt-in to validate that the PR title matches a single commit to avoid confusion. |
githubBaseUrl |
Optional | The GitHub base URL will be automatically set to the correct value from the GitHub context variable. If you want to override this, you can do so here (not recommended). Default: ${{ github.api_url }} |
ignoreLabels |
Optional | If the PR contains one of these labels (newline delimited), the validation is skipped. If you want to rerun the validation when labels change, you might want to use the `labeled` and `unlabeled` event triggers in your workflow. |
headerPattern |
Optional | If you're using a format for the PR title that differs from the traditional Conventional Commits spec, you can use this to customize the parsing of the type, scope and subject. The `headerPattern` should contain a regex where the capturing groups in parentheses correspond to the parts listed in `headerPatternCorrespondence`. For more details see: https://github.com/conventional-changelog/conventional-changelog/tree/master/packages/conventional-commits-parser#headerpattern |
headerPatternCorrespondence |
Optional | If `headerPattern` is configured, you can use this to define which capturing groups correspond to the type, scope and subject. |
wip |
Optional | For work-in-progress PRs you can typically use draft pull requests from Github. However, private repositories on the free plan don't have this option and therefore this action allows you to opt-in to using the special '[WIP]' prefix to indicate this state. This will avoid the validation of the PR title and the pull request checks remain pending. Note that a second check will be reported if this is enabled. |
name: semantic-pull-request
author: Jan Amann <jan@amann.work>
description: Ensure your PR title matches the Conventional Commits spec (https://www.conventionalcommits.org/).
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'shield'
color: 'green'
inputs:
types:
description: "Provide custom types (newline delimited) if you don't want the default ones from https://www.conventionalcommits.org. These are regex patterns auto-wrapped in `^ $`."
required: false
scopes:
description: 'Configure which scopes are allowed (newline delimited). These are regex patterns auto-wrapped in `^ $`.'
required: false
requireScope:
description: 'Configure that a scope must always be provided.'
required: false
disallowScopes:
description: 'Configure which scopes are disallowed in PR titles (newline delimited). These are regex patterns auto-wrapped in `^ $`.'
required: false
subjectPattern:
description: "Configure additional validation for the subject based on a regex. E.g. '^(?![A-Z]).+$' ensures the subject doesn't start with an uppercase character."
required: false
subjectPatternError:
description: "If `subjectPattern` is configured, you can use this property to override the default error message that is shown when the pattern doesn't match. The variables `subject` and `title` can be used within the message."
required: false
validateSingleCommit:
description: 'When using "Squash and merge" on a PR with only one commit, GitHub will suggest using that commit message instead of the PR title for the merge commit, and it''s easy to commit this by mistake. Enable this option to also validate the commit message for one commit PRs.'
required: false
validateSingleCommitMatchesPrTitle:
description: 'Related to `validateSingleCommit` you can opt-in to validate that the PR title matches a single commit to avoid confusion.'
required: false
githubBaseUrl:
description: 'The GitHub base URL will be automatically set to the correct value from the GitHub context variable. If you want to override this, you can do so here (not recommended).'
required: false
default: '${{ github.api_url }}'
ignoreLabels:
description: 'If the PR contains one of these labels (newline delimited), the validation is skipped. If you want to rerun the validation when labels change, you might want to use the `labeled` and `unlabeled` event triggers in your workflow.'
required: false
headerPattern:
description: "If you're using a format for the PR title that differs from the traditional Conventional Commits spec, you can use this to customize the parsing of the type, scope and subject. The `headerPattern` should contain a regex where the capturing groups in parentheses correspond to the parts listed in `headerPatternCorrespondence`. For more details see: https://github.com/conventional-changelog/conventional-changelog/tree/master/packages/conventional-commits-parser#headerpattern"
required: false
headerPatternCorrespondence:
description: 'If `headerPattern` is configured, you can use this to define which capturing groups correspond to the type, scope and subject.'
required: false
wip:
description: "For work-in-progress PRs you can typically use draft pull requests from Github. However, private repositories on the free plan don't have this option and therefore this action allows you to opt-in to using the special '[WIP]' prefix to indicate this state. This will avoid the validation of the PR title and the pull request checks remain pending. Note that a second check will be reported if this is enabled."
required: false
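A minimal usage sketch; the `@v5` tag is an assumption, and the `GITHUB_TOKEN` environment variable is assumed to be how the action authenticates since no token input is listed above:
- uses: amannn/action-semantic-pull-request@v5   # tag is an assumption
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}     # assumed authentication mechanism
  with:
    requireScope: false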
Action ID: marketplace/azure/publish-security-assessments
Author: Unknown
Publisher: azure
Repository: github.com/azure/publish-security-assessments
Publish security assessments to Azure
| Name | Required | Description |
|---|---|---|
artifact-type |
Required | The artifact that was scanned. Supported values - “containerImage” Default: containerImage |
artifact-id |
Optional | Unique identifier for the artifact. For artifact-type “containerImage”, the action will take the image digest by default by using docker cli |
subscription-token |
Required | ASC subscription token which can be found on the ASC portal |
instrumentation-key |
Optional | Instrumentation key of the application insights instance |
connection-string |
Optional | Connection string of the application insights instance |
scan-provider |
Required | The tool used to scan the artifact. Supported values - "trivy" |
scan-results-path |
Required | Path to the file containing scan results |
name: 'Publish security assessments to Azure'
description: 'Publish security assessments to Azure'
inputs:
artifact-type:
description: 'The artifact that was scanned. Supported values - “containerImage”'
required: true
default: 'containerImage'
artifact-id:
description: 'Unique identifier for the artifact. For artifact-type “containerImage”, the action will take the image digest by default by using docker cli'
required: false
subscription-token:
description: 'ASC subscription token which can be found on the ASC portal'
required: true
instrumentation-key:
description: 'Instrumentation key of the application insights instance'
required: false
connection-string:
description: 'Connection string of the application insights instance'
required: false
scan-provider:
description: 'The tool used to scan the artifact. Supported values - "trivy"'
required: true
scan-results-path:
description: 'Path to the file containing scan results'
required: true
runs:
using: 'node12'
main: 'lib/run.js'
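A minimal usage sketch for publishing Trivy results; the tag, secret name, and results path are example values:
- uses: azure/publish-security-assessments@v0   # tag is an assumption
  with:
    artifact-type: containerImage
    subscription-token: ${{ secrets.ASC_SUBSCRIPTION_TOKEN }}   # example secret name
    scan-provider: trivy
    scan-results-path: trivy-results.json                       # example path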
Action ID: marketplace/aws-actions/application-observability-for-aws
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/application-observability-for-aws
Support Agentic AI–driven performance investigation and resolution using live production telemetry.
| Name | Required | Description |
|---|---|---|
bot_name |
Optional | The bot name to respond to in comments (e.g., @awsapm) Default: @awsapm |
target_branch |
Optional | The branch to merge PRs into (defaults to repository default branch) |
branch_prefix |
Optional | Prefix for branches created by the action (e.g., 'awsapm/', 'awsapm-') Default: awsapm/ |
allowed_non_write_users |
Optional | Comma-separated list of GitHub usernames allowed to use this action even without write access. Empty string (default) means only users with write/admin access can use it. |
github_token |
Optional | GitHub token to use for API calls. If not provided, uses the default GitHub Actions token. Default: ${{ github.token }} |
custom_prompt |
Optional | Custom instructions to guide the AI agent's analysis and actions |
cli_tool |
Optional | CLI tool to use for investigation Default: claude_code |
output_file |
Optional | Path to AI agent execution output file (used to post results back to GitHub). For claude_code path, pass this from claude-code-base-action outputs in a second call to this action. |
output_status |
Optional | Execution status from AI agent (success or failure). Only used when output_file is provided. |
comment_id |
Optional | GitHub comment ID to update with results (required for Step 3 - pass from Step 1 output: steps.prepare.outputs.awsapm_comment_id) |
enable_cloudwatch_mcp |
Optional | Enable CloudWatch MCP server for metrics, alarms, and log insights Default: true |
test_mode |
Optional | Enable integration test mode (internal use only) Default: false |
| Name | Description |
|---|---|
prompt_file |
Path to the generated prompt file for claude-code-base-action |
mcp_config_file |
Path to the MCP servers configuration JSON file |
allowed_tools |
Comma-separated list of allowed tools |
awsapm_comment_id |
GitHub comment ID for tracking the investigation |
branch_name |
The branch created by Application observability for AWS Action for this execution |
github_token |
The GitHub token used by the action |
name: "Application observability for AWS"
description: "Support Agentic AI–driven performance investigation and resolution using live production telemetry."
branding:
icon: "cloud"
color: "orange"
inputs:
bot_name:
description: "The bot name to respond to in comments (e.g., @awsapm)"
required: false
default: "@awsapm"
target_branch:
description: "The branch to merge PRs into (defaults to repository default branch)"
required: false
branch_prefix:
description: "Prefix for branches created by the action (e.g., 'awsapm/', 'awsapm-')"
required: false
default: "awsapm/"
allowed_non_write_users:
description: "Comma-separated list of GitHub usernames allowed to use this action even without write access. Empty string (default) means only users with write/admin access can use it."
required: false
default: ""
github_token:
description: "GitHub token to use for API calls. If not provided, uses the default GitHub Actions token."
required: false
default: ${{ github.token }}
custom_prompt:
description: "Custom instructions to guide the AI agent's analysis and actions"
required: false
default: ""
cli_tool:
description: "CLI tool to use for investigation"
required: false
default: "claude_code"
output_file:
description: "Path to AI agent execution output file (used to post results back to GitHub). For claude_code path, pass this from claude-code-base-action outputs in a second call to this action."
required: false
output_status:
description: "Execution status from AI agent (success or failure). Only used when output_file is provided."
required: false
comment_id:
description: "GitHub comment ID to update with results (required for Step 3 - pass from Step 1 output: steps.prepare.outputs.awsapm_comment_id)"
required: false
enable_cloudwatch_mcp:
description: "Enable CloudWatch MCP server for metrics, alarms, and log insights"
required: false
default: "true"
test_mode:
description: "Enable integration test mode (internal use only)"
required: false
default: "false"
outputs:
# Preparation outputs for claude-code-base-action
prompt_file:
description: "Path to the generated prompt file for claude-code-base-action"
value: ${{ steps.prepare-claude.outputs.prompt_file }}
mcp_config_file:
description: "Path to the MCP servers configuration JSON file"
value: ${{ steps.prepare-claude.outputs.mcp_config_file }}
allowed_tools:
description: "Comma-separated list of allowed tools"
value: ${{ steps.prepare-claude.outputs.allowed_tools }}
# Common outputs
awsapm_comment_id:
description: "GitHub comment ID for tracking the investigation"
value: ${{ steps.init.outputs.awsapm_comment_id }}
branch_name:
description: "The branch created by Application observability for AWS Action for this execution"
value: ${{ steps.init.outputs.AWSAPM_BRANCH }}
github_token:
description: "The GitHub token used by the action"
value: ${{ steps.init.outputs.GITHUB_TOKEN }}
runs:
using: "composite"
steps:
- name: Validate test mode usage
if: inputs.test_mode == 'true'
shell: bash
run: |
if [[ "${{ github.repository }}" != "aws-actions/application-observability-for-aws" || "${{ github.ref_name }}" != "main" ]]; then
echo "::error::not authorized to run test mode in this repository"
exit 1
fi
- name: Install Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Dependencies
shell: bash
run: |
cd ${GITHUB_ACTION_PATH}
npm install
- name: Init action
id: init
if: inputs.output_file == '' # Skip init when posting results
shell: bash
run: |
node ${GITHUB_ACTION_PATH}/src/init.js
env:
CUSTOM_PROMPT: ${{ inputs.custom_prompt }}
BOT_NAME: ${{ inputs.bot_name }}
TARGET_BRANCH: ${{ inputs.target_branch }}
BRANCH_PREFIX: ${{ inputs.branch_prefix }}
OVERRIDE_GITHUB_TOKEN: ${{ inputs.github_token }}
ALLOWED_NON_WRITE_USERS: ${{ inputs.allowed_non_write_users }}
GITHUB_RUN_ID: ${{ github.run_id }}
DEFAULT_WORKFLOW_TOKEN: ${{ github.token }}
TEST_MODE: ${{ inputs.test_mode }}
# Validate CLI tool input
- name: Validate CLI Tool
id: validate-cli
if: inputs.output_file == '' && inputs.cli_tool != 'claude_code'
continue-on-error: true
shell: bash
run: |
# Create error output file
mkdir -p ${{ runner.temp }}/awsapm-output
ERROR_FILE="${{ runner.temp }}/awsapm-output/validation-error.txt"
cat > "$ERROR_FILE" << 'EOF'
❌ **Invalid CLI Tool**
The CLI tool `${{ inputs.cli_tool }}` is not supported.
**Supported tools:**
- `claude_code` (default)
Please update your workflow configuration to use `cli_tool: "claude_code"` or remove the `cli_tool` input to use the default.
EOF
echo "validation_error_file=$ERROR_FILE" >> $GITHUB_OUTPUT
echo "::error::Unsupported cli_tool: ${{ inputs.cli_tool }}. Only 'claude_code' is supported."
exit 1 # Fail the step but continue workflow to post error
# Install MCP Tools for claude-code-base-action
- name: Install MCP Tools
if: steps.init.outputs.contains_trigger == 'true' && steps.validate-cli.outcome == 'skipped'
shell: bash
run: |
set -e # Exit immediately if any command fails
# Install uv/uvx for MCP server execution
echo "Installing uv package manager for MCP server support..."
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
# Verify uvx installation
if ! command -v uvx &> /dev/null; then
echo "::error::uvx installation failed - MCP tools can not work"
exit 1
fi
echo "uvx installed successfully"
uvx --version
- name: Prepare Claude Configuration
id: prepare-claude
if: steps.init.outputs.contains_trigger == 'true' && steps.validate-cli.outcome == 'skipped'
shell: bash
run: |
cd ${GITHUB_ACTION_PATH}
node src/prepare-claude-config.js
env:
GITHUB_TOKEN: ${{ steps.init.outputs.GITHUB_TOKEN }}
ENABLE_CLOUDWATCH_MCP: ${{ inputs.enable_cloudwatch_mcp }}
INPUT_PROMPT_FILE: ${{ runner.temp }}/awsapm-prompts/awsapm-prompt.txt
OUTPUT_DIR: ${{ runner.temp }}/awsapm-prompts
# Post results (when output_file is provided OR validation failed)
- name: Post Investigation Results
if: |
inputs.output_file != '' ||
(steps.validate-cli.outcome == 'failure' && steps.init.outputs.awsapm_comment_id != '')
shell: bash
run: |
cd ${GITHUB_ACTION_PATH}
node src/post-result.js
env:
AWSAPM_COMMENT_ID: ${{ inputs.comment_id || steps.init.outputs.awsapm_comment_id }}
CLAUDE_EXECUTION_FILE: ${{ steps.validate-cli.outputs.validation_error_file || inputs.output_file }}
CLAUDE_CONCLUSION: ${{ steps.validate-cli.outcome == 'failure' && 'failure' || inputs.output_status }}
GITHUB_TOKEN: ${{ steps.init.outputs.GITHUB_TOKEN || inputs.github_token }}
REPOSITORY: ${{ github.repository }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_RUN_ID: ${{ github.run_id }}
TRIGGER_USERNAME: ${{ github.event.comment.user.login || github.event.issue.user.login || github.event.pull_request.user.login || github.event.sender.login || github.triggering_actor || github.actor || 'unknown' }}
# Fail the workflow if validation failed (after posting error to GitHub)
- name: Fail Workflow on Validation Error
if: steps.validate-cli.outcome == 'failure'
shell: bash
run: |
echo "::error::Workflow stopped due to validation error. See the posted comment for details."
exit 1
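A minimal usage sketch of the two calls this composite action expects around an AI-agent step; the tag is an assumption, the middle step is only sketched in comments, and all values are illustrative:
- id: prepare
  uses: aws-actions/application-observability-for-aws@v1    # tag is an assumption
  with:
    custom_prompt: "Focus on latency regressions"            # example
# ... run the claude_code agent here (e.g. claude-code-base-action), feeding it
#     steps.prepare.outputs.prompt_file, mcp_config_file and allowed_tools ...
- uses: aws-actions/application-observability-for-aws@v1    # second call posts results back to GitHub
  with:
    output_file: /tmp/agent-output.json                      # example; the agent step's output file
    output_status: success                                   # example
    comment_id: ${{ steps.prepare.outputs.awsapm_comment_id }}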
Action ID: marketplace/dflook/terraform-validate
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-validate
Validate a Terraform configuration directory
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the Terraform module to validate Default: . |
workspace |
Optional | Terraform workspace to use for the `terraform.workspace` value while validating. Note that for remote operations in a cloud backend, this is always `default`.
Also used for discovering the Terraform version to use, if not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
Default: default |
backend_config |
Optional | List of Terraform backend config values, one per line. This is used for discovering the Terraform version to use, if not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace This is used for discovering the Terraform version to use, if not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure` because the validation failed, this will be set to 'validate-failed'. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run a step when the validate fails. |
name: terraform-validate
description: Validate a Terraform configuration directory
author: Daniel Flook
inputs:
path:
description: The path to the Terraform module to validate
required: false
default: "."
workspace:
description: |
Terraform workspace to use for the `terraform.workspace` value while validating. Note that for remote operations in a cloud backend, this is always `default`.
Also used for discovering the Terraform version to use, if not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: "default"
backend_config:
description: |
List of Terraform backend config values, one per line.
This is used for discovering the Terraform version to use, if not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
This is used for discovering the Terraform version to use, if not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: ""
outputs:
failure-reason:
description: |
When the job outcome is `failure` because the validation failed, this will be set to 'validate-failed'.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a step when the validate fails.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/validate.sh
branding:
icon: globe
color: purple
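A minimal usage sketch; the tag and path are example values, and the `failure-reason` output is used as described above to gate a follow-up step:
- id: validate
  uses: dflook/terraform-validate@v1        # tag is an assumption
  with:
    path: modules/network                   # example module path
- if: ${{ failure() && steps.validate.outputs.failure-reason == 'validate-failed' }}
  run: echo "terraform validate reported configuration errors"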
Action ID: marketplace/brandedoutcast/publish-nuget
Author: R2 (@brandedoutcast)
Publisher: brandedoutcast
Repository: github.com/brandedoutcast/publish-nuget
Build, Pack & Publish a NuGet package with dotnet core on project version change
| Name | Required | Description |
|---|---|---|
PROJECT_FILE_PATH |
Required | Filepath of the project to be packaged, relative to root of repository |
BUILD_CONFIGURATION |
Optional | Configuration to build and package (default is Release) Default: Release |
BUILD_PLATFORM |
Optional | Platform target to compile (default is empty/AnyCPU) |
NUSPEC_FILE |
Optional | file path to nuspec file |
PACKAGE_NAME |
Optional | NuGet package id, used for version detection & defaults to project name |
VERSION_FILE_PATH |
Optional | Filepath with version info, relative to root of repository & defaults to PROJECT_FILE_PATH |
VERSION_REGEX |
Optional | Regex pattern to extract version info in a capturing group Default: ^\s*<(Package|)Version>(.*)<\/(Package|)Version>\s*$ |
VERSION_STATIC |
Optional | Useful with external providers like Nerdbank.GitVersioning, ignores VERSION_FILE_PATH & VERSION_REGEX |
TAG_COMMIT |
Optional | Flag to toggle git tagging, enabled by default Default: True |
TAG_FORMAT |
Optional | Format of the git tag, [*] gets replaced with actual version Default: v* |
NUGET_KEY |
Optional | API key to authenticate with NuGet server |
NUGET_SOURCE |
Optional | NuGet server uri hosting the packages, defaults to https://api.nuget.org Default: https://api.nuget.org |
INCLUDE_SYMBOLS |
Optional | Flag to toggle pushing symbols along with nuget package to the server, disabled by default |
| Name | Description |
|---|---|
VERSION |
Version of the associated git tag |
PACKAGE_NAME |
Name of the NuGet package generated |
PACKAGE_PATH |
Path to the generated NuGet package |
SYMBOLS_PACKAGE_NAME |
Name of the symbols package generated |
SYMBOLS_PACKAGE_PATH |
Path to the generated symbols package |
name: Publish NuGet
author: R2 (@brandedoutcast)
description: Build, Pack & Publish a NuGet package with dotnet core on project version change
inputs:
PROJECT_FILE_PATH:
description: Filepath of the project to be packaged, relative to root of repository
required: true
BUILD_CONFIGURATION:
description: Configuration to build and package (default is Release)
required: false
default: Release
BUILD_PLATFORM:
description: Platform target to compile (default is empty/AnyCPU)
required: false
default:
NUSPEC_FILE:
description: file path to nuspec file
required: false
default:
PACKAGE_NAME:
description: NuGet package id, used for version detection & defaults to project name
required: false
VERSION_FILE_PATH:
description: Filepath with version info, relative to root of repository & defaults to PROJECT_FILE_PATH
required: false
VERSION_REGEX:
description: Regex pattern to extract version info in a capturing group
required: false
default: ^\s*<(Package|)Version>(.*)<\/(Package|)Version>\s*$
VERSION_STATIC:
description: Useful with external providers like Nerdbank.GitVersioning, ignores VERSION_FILE_PATH & VERSION_REGEX
required: false
TAG_COMMIT:
description: Flag to toggle git tagging, enabled by default
required: false
default: true
TAG_FORMAT:
description: Format of the git tag, [*] gets replaced with actual version
required: false
default: v*
NUGET_KEY:
description: API key to authenticate with NuGet server
required: false
NUGET_SOURCE:
description: NuGet server uri hosting the packages, defaults to https://api.nuget.org
required: false
default: https://api.nuget.org
INCLUDE_SYMBOLS:
description: Flag to toggle pushing symbols along with nuget package to the server, disabled by default
required: false
default: false
outputs:
VERSION:
description: Version of the associated git tag
PACKAGE_NAME:
description: Name of the NuGet package generated
PACKAGE_PATH:
description: Path to the generated NuGet package
SYMBOLS_PACKAGE_NAME:
description: Name of the symbols package generated
SYMBOLS_PACKAGE_PATH:
description: Path to the generated symbols package
runs:
using: node12
main: index.js
branding:
icon: package
color: blue
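Below is a hedged usage sketch; the `@v2` ref, the project path, the SDK version, and the `NUGET_API_KEY` secret name are assumptions, while the input names match the metadata above.
on:
  push:
    branches: [master]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'                 # hypothetical SDK version
      - name: Publish NuGet package on version change
        uses: brandedoutcast/publish-nuget@v2     # ref is an assumption
        with:
          PROJECT_FILE_PATH: src/MyLibrary/MyLibrary.csproj   # hypothetical project path
          NUGET_KEY: ${{ secrets.NUGET_API_KEY }}             # hypothetical secret name
          TAG_FORMAT: v*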
Action ID: marketplace/azure/static-web-apps-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/static-web-apps-deploy
Deploys an application to Azure Static Web Apps
| Name | Required | Description |
|---|---|---|
action |
Required | Action to perform |
app_location |
Required | Directory location of the application source code |
azure_static_web_apps_api_token |
Required | Required token |
api_build_command |
Optional | Custom command for Oryx to run when building Azure Functions source code |
api_location |
Optional | Directory location of the Azure Functions source code |
app_artifact_location |
Optional | Identical in use to output_location |
output_location |
Optional | Directory location of the compiled application code after building |
app_build_command |
Optional | Custom command for Oryx to run when building application source code |
repo_token |
Optional | Token for interacting with the GitHub repository. Currently used only for commenting on Pull Requests. |
routes_location |
Optional | Directory location where the routes.json file can be found in the source code |
skip_app_build |
Optional | Skips the build step for the application source code if set to true. |
config_file_location |
Optional | Directory location where the staticwebapp.config.json file can be found in the source code |
skip_api_build |
Optional | Skips the build step for the Azure Functions source code if set to true. |
production_branch |
Optional | When specified, deployments from other branches will be considered preview environments. |
deployment_environment |
Optional | Name of the Azure Static Web Apps environment to deploy to. |
is_static_export |
Optional | Indicates you are using `next export` to generate a static site if set to true. |
data_api_location |
Optional | Directory location of the Data API configuration files |
github_id_token |
Optional | GitHub Auth id token |
| Name | Description |
|---|---|
static_web_app_url |
Url of the application |
name: "Azure Static Web Apps Deploy"
description: "Deploys an application to Azure Static Web Apps"
branding:
icon: "upload-cloud"
color: "blue"
inputs:
action:
description: "Action to perform"
required: true
app_location:
description: "Directory location of the application source code"
required: true
azure_static_web_apps_api_token:
description: "Required token"
required: true
api_build_command:
description: "Custom command for Oryx to run when building Azure Functions source code"
required: false
api_location:
description: "Directory location of the Azure Functions source code"
required: false
app_artifact_location:
description: "Identical in use to output_location"
required: false
output_location:
description: "Directory location of the compiled application code after building"
required: false
app_build_command:
description: "Custom command for Oryx to run when building application source code"
required: false
repo_token:
description: "Token for interacting with the Github repository. Currently used only for commenting on Pull Requests."
required: false
routes_location:
description: "Directory location where the routes.json file can be found in the source code"
required: false
skip_app_build:
description: "Skips the build step for the application source code if set to true."
required: false
config_file_location:
description: "Directory location where the staticwebapp.config.json file can be found in the source code"
required: false
skip_api_build:
description: "Skips the build step for the Azure Functions source code if set to true."
required: false
production_branch:
description: "When specified, deployments from other branches will be considered preview environments."
required: false
deployment_environment:
description: "Name of the Azure Static Web Apps environment to deploy to."
required: false
is_static_export:
description: "Indicates you are using `next export` to generate a static site if set to true."
required: false
data_api_location:
description: "Directory location of the Data API configuration files"
required: false
github_id_token:
description: "GitHub Auth id token"
required: false
outputs:
static_web_app_url:
description: "Url of the application"
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/mheap/github-action-heroku-logs
Author: Michael Heap
Publisher: mheap
Repository: github.com/mheap/github-action-heroku-logs
Attach Heroku build logs on deployment failure
name: 'Heroku Build Logs'
description: 'Attach Heroku build logs on deployment failure'
author: 'Michael Heap'
branding:
icon: 'mail'
color: 'orange'
runs:
using: 'docker'
image: 'Dockerfile'
Action ID: marketplace/docker/ghaction-dump-context
Author: crazy-max
Publisher: docker
Repository: github.com/docker/ghaction-dump-context
GitHub Action composite to dump context
# https://help.github.com/en/articles/metadata-syntax-for-github-actions
name: 'Dump context'
description: 'GitHub Action composite to dump context'
author: 'crazy-max'
branding:
color: 'yellow'
icon: 'activity'
runs:
using: "composite"
steps:
-
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
await core.group(`Env vars`, async () => {
const envs = Object.keys(process.env).sort().reduce(
(obj, key) => {
obj[key] = process.env[key];
return obj;
}, {}
);
core.info(JSON.stringify(Object.fromEntries(Object.entries(envs).filter(([key]) => !key.startsWith('GHACTION_DCTX_') && !key.startsWith('INPUT_'))), null, 2));
});
await core.group(`GitHub context`, async () => {
core.info(JSON.stringify(JSON.parse(`${process.env.GHACTION_DCTX_GITHUB_CONTEXT}`), null, 2));
});
await core.group(`Job context`, async () => {
core.info(JSON.stringify(JSON.parse(`${process.env.GHACTION_DCTX_JOB_CONTEXT}`), null, 2));
});
await core.group(`Steps context`, async () => {
core.info(JSON.stringify(JSON.parse(`${process.env.GHACTION_DCTX_STEPS_CONTEXT}`), null, 2));
});
await core.group(`Runner context`, async () => {
core.info(JSON.stringify(JSON.parse(`${process.env.GHACTION_DCTX_RUNNER_CONTEXT}`), null, 2));
});
await core.group(`Strategy context`, async () => {
core.info(JSON.stringify(JSON.parse(`${process.env.GHACTION_DCTX_STRATEGY_CONTEXT}`), null, 2));
});
await core.group(`Matrix context`, async () => {
core.info(JSON.stringify(JSON.parse(`${process.env.GHACTION_DCTX_MATRIX_CONTEXT}`), null, 2));
});
await core.group(`Docker info`, async () => {
await exec.exec('docker', ['info'], {ignoreReturnCode: true}).catch(error => {
core.info(error);
});
});
await core.group(`Docker version`, async () => {
await exec.exec('docker', ['version'], {ignoreReturnCode: true}).catch(error => {
core.info(error);
});
});
await core.group(`Docker images`, async () => {
await exec.exec('docker', ['image', 'ls'], {ignoreReturnCode: true}).catch(error => {
core.info(error);
});
});
if (`${process.env.RUNNER_OS}` == 'Linux') {
await core.group(`Check/install deps`, async () => {
const required_cmds = (`${process.env.RUNNER_ARCH}` == 'X64') ? ["lscgroup", "cpuid"] : ["lscgroup"];
try {
await exec.exec('which', required_cmds);
} catch (error) {
core.info("Installing deps...");
const sudo = await exec.getExecOutput('which sudo', [], {silent: true, ignoreReturnCode: true})
if (sudo.stdout != "") {
const aptget = await exec.getExecOutput('which apt-get', [], {silent: true, ignoreReturnCode: true})
if (aptget.stdout != "") {
await exec.exec('sudo', ['apt-get', 'update'], {ignoreReturnCode: true});
const required_deps = (`${process.env.RUNNER_ARCH}` == 'X64') ? ["cgroup-tools", "cpuid"] : ["cgroup-tools"];
await exec.exec('sudo', ['apt-get', 'install', '-y'].concat(required_deps), {ignoreReturnCode: true});
} else {
core.info('apt-get not found; not installing deps')
}
} else {
core.info('sudo not found; not installing deps')
}
}
});
await core.group(`Print hosts`, async () => {
await exec.exec('cat /etc/hosts');
});
await core.group(`Print resolv.conf`, async () => {
await exec.exec('cat /etc/resolv.conf');
});
await core.group(`Print cpuinfo`, async () => {
await exec.exec('cat /proc/cpuinfo');
});
await core.group(`Print cpuid`, async () => {
const cpuid = await exec.getExecOutput('which cpuid', [], {silent: true, ignoreReturnCode: true})
if (cpuid.stdout != "") {
await exec.exec('cpuid');
} else {
core.info('cpuid not found')
}
});
await core.group(`File system`, async () => {
await exec.exec('df -ah');
});
await core.group(`Mounts`, async () => {
await exec.exec('mount');
});
await core.group(`Systemd logs`, async () => {
const sudo = await exec.getExecOutput('which sudo', [], {silent: true, ignoreReturnCode: true})
if (sudo.stdout != "") {
await exec.exec('sudo journalctl --no-pager --system --user --all');
} else {
await exec.exec('journalctl --no-pager --system --user --all');
}
});
await core.group(`Docker daemon conf`, async () => {
if ((fs.statSync('/etc/docker', {throwIfNoEntry: false}) != undefined) &&
(fs.statSync('/etc/docker/daemon.json', {throwIfNoEntry: false}) != undefined)) {
core.info(JSON.stringify(JSON.parse(fs.readFileSync('/etc/docker/daemon.json', {encoding: 'utf-8'}).trim()), null, 2));
} else {
core.info('/etc/docker/daemon.json not present')
}
});
await core.group(`Cgroups`, async () => {
const lscgroup = await exec.getExecOutput('which lscgroup', [], {silent: true, ignoreReturnCode: true})
if (lscgroup.stdout != "") {
await exec.exec('lscgroup');
} else {
core.info('lscgroup not found')
}
});
await core.group(`containerd version`, async () => {
const containerd = await exec.getExecOutput('which containerd', [], {silent: true, ignoreReturnCode: true})
if (containerd.stdout != "") {
await exec.exec('containerd --version');
} else {
core.info('containerd not found')
}
});
}
env:
GHACTION_DCTX_GITHUB_CONTEXT: ${{ toJson(github) }}
GHACTION_DCTX_JOB_CONTEXT: ${{ toJson(job) }}
GHACTION_DCTX_STEPS_CONTEXT: ${{ toJson(steps) }}
GHACTION_DCTX_RUNNER_CONTEXT: ${{ toJson(runner) }}
GHACTION_DCTX_STRATEGY_CONTEXT: ${{ toJson(strategy) }}
GHACTION_DCTX_MATRIX_CONTEXT: ${{ toJson(matrix) }}
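Since the action takes no inputs, a usage sketch is a single step; the `@v2` ref is an assumption.
steps:
  - name: Dump contexts and runner diagnostics
    uses: docker/ghaction-dump-context@v2   # ref is an assumption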
Action ID: marketplace/actions/typescript-action
Author: Your name or organization here
Publisher: actions
Repository: github.com/actions/typescript-action
Provide a description here
| Name | Required | Description |
|---|---|---|
milliseconds |
Required | Your input description here Default: 1000 |
| Name | Description |
|---|---|
time |
Your output description here |
name: The name of your action here
description: Provide a description here
author: Your name or organization here
# Add your action's branding here. This will appear on the GitHub Marketplace.
branding:
icon: heart
color: red
# Define your inputs here.
inputs:
milliseconds:
description: Your input description here
required: true
default: '1000'
# Define your outputs here.
outputs:
time:
description: Your output description here
runs:
using: node24
main: dist/index.js
Action ID: marketplace/bcanseco/github-contribution-graph-action
Author: Borja Canseco <borj@cans.eco>
Publisher: bcanseco
Repository: github.com/bcanseco/github-contribution-graph-action
This action will automatically push empty commits to one of your GitHub repos.
name: 'Autopopulate your contribution graph'
author: 'Borja Canseco <borj@cans.eco>'
description: 'This action will automatically push empty commits to one of your GitHub repos.'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'grid'
color: 'green'
Action ID: marketplace/dineshsonachalam/github-action
Author: Unknown
Publisher: dineshsonachalam
Repository: github.com/dineshsonachalam/github-action
Generate your API doc and an automatic changelog from any OpenAPI or AsyncAPI file.
| Name | Required | Description |
|---|---|---|
file |
Required | Relative path to the documentation file Default: api-contract.yml |
doc |
Required | Documentation id. Can be found in the documentation settings on https://bump.sh |
token |
Required | Documentation token. Can be found in the documentation settings on https://bump.sh |
command |
Optional | Bump command: deploy/dry-run/preview Default: deploy |
name: 'API documentation on Bump'
description: 'Generate your API doc and an automatic changelog from any OpenAPI or AsyncAPI file.'
inputs:
file:
description: 'Relative path to the documentation file'
required: true
default: api-contract.yml
doc:
description: 'Documentation id. Can be found in the documentation settings on https://bump.sh'
required: true
token:
description: 'Documentation token. Can be found in the documentation settings on https://bump.sh'
required: true
command:
description: 'Bump command: deploy/dry-run/preview'
default: deploy
runs:
using: 'node12'
main: 'dist/index.js'
branding:
color: gray-dark
icon: book-open
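A hedged usage sketch; the `@master` ref, the documentation id, and the `BUMP_TOKEN` secret name are assumptions, while the input names come from the metadata above.
steps:
  - uses: actions/checkout@v4
  - name: Deploy API documentation to Bump
    uses: dineshsonachalam/github-action@master   # ref is an assumption
    with:
      file: api-contract.yml
      doc: my-api-doc                     # hypothetical documentation id
      token: ${{ secrets.BUMP_TOKEN }}    # hypothetical secret name
      command: deploy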
Action ID: marketplace/azure/issue-sentinel
Author: Unknown
Publisher: azure
Repository: github.com/azure/issue-sentinel
Get similar issues by Issue Sentinel
| Name | Required | Description |
|---|---|---|
github-token |
Optional | The GitHub token used to create an authenticated client Default: ${{ github.token }} |
enable-similar-issues-scanning |
Optional | Enable similar issues scanning Default: true |
enable-security-issues-scanning |
Optional | Enable security issues scanning Default: false |
enable-ux-tag |
Optional | Enable UX tag Default: false |
name: 'Issue Sentinel'
description: 'Get similar issues by Issue Sentinel'
inputs:
github-token:
description: 'The GitHub token used to create an authenticated client'
default: ${{ github.token }}
required: false
enable-similar-issues-scanning:
description: 'Enable similar issues scanning'
default: 'true'
required: false
enable-security-issues-scanning:
description: 'Enable security issues scanning'
default: 'false'
required: false
enable-ux-tag:
description: 'Enable UX tag'
default: 'false'
required: false
branding:
icon: 'shield'
color: 'blue'
runs:
using: 'node20'
main: 'dist/index.js'
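A usage sketch triggered on new issues; the `@v1` ref and the `issues: opened` trigger are assumptions, while the input names come from the metadata above.
on:
  issues:
    types: [opened]
jobs:
  sentinel:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/issue-sentinel@v1   # ref is an assumption
        with:
          github-token: ${{ github.token }}
          enable-similar-issues-scanning: 'true'
          enable-security-issues-scanning: 'false'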
Action ID: marketplace/amirisback/frogo-admob
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/frogo-admob
Admob Helper for your android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'FrogoAdmob'
description: 'Admob Helper for your android apps'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/lukka/github-script
Author: GitHub
Publisher: lukka
Repository: github.com/lukka/github-script
Run simple scripts using the GitHub client
| Name | Required | Description |
|---|---|---|
script |
Required | The path to the file containing the script to run |
github-token |
Optional | The GitHub token used to create an authenticated client Default: ${{ github.token }} |
debug |
Optional | Whether to tell the GitHub client to log details of its requests |
user-agent |
Optional | An optional user-agent string Default: actions/github-script |
previews |
Optional | A comma-separated list of API previews to accept |
result-encoding |
Optional | Either "string" or "json" (default "json")—how the result will be encoded Default: json |
| Name | Description |
|---|---|
result |
The return value of the script, stringified with `JSON.stringify` |
name: GitHub Script
author: GitHub
description: Run simple scripts using the GitHub client
branding:
color: blue
icon: code
inputs:
script:
description: The path to the file containing the script to run
required: true
github-token:
description: The GitHub token used to create an authenticated client
default: ${{ github.token }}
required: false
debug:
description: Whether to tell the GitHub client to log details of its requests
default: false
user-agent:
description: An optional user-agent string
default: actions/github-script
previews:
description: A comma-separated list of API previews to accept
result-encoding:
description: Either "string" or "json" (default "json")—how the result will be encoded
default: json
outputs:
result:
description: The return value of the script, stringified with `JSON.stringify`
runs:
using: node12
main: dist/index.js
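Here the `script` input is a path to a file, so a usage sketch looks like the following; the `@v3` ref and the script path are assumptions, while the remaining input names and the `result` output are documented above.
steps:
  - uses: actions/checkout@v4
  - name: Run a script file with the GitHub client
    id: script
    uses: lukka/github-script@v3   # ref is an assumption
    with:
      script: ./.github/scripts/label-issue.js   # hypothetical script path
      github-token: ${{ secrets.GITHUB_TOKEN }}
      result-encoding: string
  - run: echo "${{ steps.script.outputs.result }}"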
Action ID: marketplace/pascalgn/automerge-action
Author: Unknown
Publisher: pascalgn
Repository: github.com/pascalgn/automerge-action
Automatically merge pull requests that are ready
name: "Merge pull requests (automerge-action)"
description: "Automatically merge pull requests that are ready"
runs:
using: "node20"
main: "dist/index.js"
branding:
icon: "git-pull-request"
color: "blue"
Action ID: marketplace/amirisback/kick-start-android-library
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/kick-start-android-library
SDK for anything your problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Kick-Start-Library'
description: 'SDK for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/google-github-actions/get-gke-credentials
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/get-gke-credentials
Generate ephemeral credentials for authenticating to Google Kubernetes Engine (GKE) and kubectl, helm, etc.
| Name | Required | Description |
|---|---|---|
cluster_name |
Required | Name of the cluster for which to get credentials. This can be specified as a full resource name: projects/<project>/locations/<location>/clusters/<cluster> In which case the `project_id` and `location` inputs are optional. If only specified as a name: <cluster> then both the `project_id` and `location` may be required. |
location |
Optional | Location (region or zone) in which the cluster resides. This value is required unless `cluster_name` is a full resource name. |
project_id |
Optional | Project ID where the cluster is deployed. If provided, this will override the project configured by previous steps or environment variables. If not provided, the project will be inferred from the environment, best-effort. |
context_name |
Optional | Name to use when creating the `kubectl` context. If not specified, the default value is `gke_<project>_<location>_<cluster>`. |
namespace |
Optional | Name of the Kubernetes namespace to use within the context. |
use_auth_provider |
Optional | Set this to true to use the Google Cloud auth plugin in `kubectl` instead of inserting a short-lived access token. Default: false |
use_internal_ip |
Optional | Set this to true to use the internal IP address for the cluster endpoint. This is mostly used with private GKE clusters. Default: false |
use_connect_gateway |
Optional | Set this to true to use the [Connect Gateway endpoint](https://cloud.google.com/anthos/multicluster-management/gateway) to connect to the cluster. Default: false |
use_dns_based_endpoint |
Optional | Set this to true to use the [DNS-based endpoint](https://cloud.google.com/kubernetes-engine/docs/concepts/network-isolation#dns-based_endpoint) to connect to the cluster. Default: false |
fleet_membership_name |
Optional | Fleet membership name to use for generating Connect Gateway endpoint, of the form: projects/<project>/locations/<location>/memberships/<membership> This only applies if `use_connect_gateway` is true. Defaults to auto discovery if empty. |
quota_project_id |
Optional | Project ID from which to pull quota. The caller must have `serviceusage.services.use` permission on the project. If unspecified, this defaults to the project of the authenticated principal. This is an advanced setting, most users should leave this blank. |
| Name | Description |
|---|---|
kubeconfig_path |
Path on the local filesystem where the generated Kubernetes configuration file resides. |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Get GKE Credentials'
author: 'Google LLC'
description: |-
Generate ephemeral credentials for authenticating to Google Kubernetes Engine
(GKE) and kubectl, helm, etc.
inputs:
cluster_name:
description: |-
Name of the cluster for which to get credentials. This can be specified as
a full resource name:
projects/<project>/locations/<location>/clusters/<cluster>
In which case the `project_id` and `location` inputs are optional. If only
specified as a name:
<cluster>
then both the `project_id` and `location` may be required.
required: true
location:
description: |-
Location (region or zone) in which the cluster resides. This value is
required unless `cluster_name` is a full resource name.
required: false
project_id:
description: |-
Project ID where the cluster is deployed. If provided, this will override
the project configured by previous steps or environment variables. If not
provided, the project will be inferred from the environment, best-effort.
required: false
context_name:
description: |-
Name to use when creating the `kubectl` context. If not specified, the
default value is `gke_<project>_<location>_<cluster>`.
required: false
namespace:
description: |-
Name of the Kubernetes namespace to use within the context.
required: false
use_auth_provider:
description: |-
Set this to true to use the Google Cloud auth plugin in `kubectl` instead
of inserting a short-lived access token.
default: 'false'
required: false
use_internal_ip:
description: |-
Set this to true to use the internal IP address for the cluster endpoint.
This is mostly used with private GKE clusters.
default: 'false'
required: false
use_connect_gateway:
description: |-
Set this to true to use the [Connect Gateway
endpoint](https://cloud.google.com/anthos/multicluster-management/gateway)
to connect to cluster.
default: 'false'
required: false
use_dns_based_endpoint:
description: |-
Set this true to use [DNS-based endpoint](https://cloud.google.com/kubernetes-engine/docs/concepts/network-isolation#dns-based_endpoint)
to connect to the cluster.
default: 'false'
required: false
fleet_membership_name:
description: |-
Fleet membership name to use for generating Connect Gateway endpoint, of
the form:
projects/<project>/locations/<location>/memberships/<membership>
This only applies if `use_connect_gateway` is true. Defaults to auto
discovery if empty.
required: false
quota_project_id:
description: |-
Project ID from which to pull quota. The caller must have
`serviceusage.services.use` permission on the project. If unspecified,
this defaults to the project of the authenticated principal. This is an
advanced setting, most users should leave this blank.
required: false
outputs:
kubeconfig_path:
description: |-
Path on the local filesystem where the generated Kubernetes configuration
file resides.
branding:
icon: 'lock'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
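A typical usage sketch pairs this action with google-github-actions/auth before running kubectl; the `@v2` refs, the Workload Identity provider, the service account, and the cluster details are placeholder assumptions, while `cluster_name` and `location` are inputs documented above.
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: google-github-actions/auth@v2   # ref is an assumption
        with:
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider  # hypothetical
          service_account: deployer@my-project.iam.gserviceaccount.com  # hypothetical
      - uses: google-github-actions/get-gke-credentials@v2   # ref is an assumption
        with:
          cluster_name: my-cluster   # hypothetical cluster name
          location: us-central1      # hypothetical location
      - run: kubectl get pods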
Action ID: marketplace/amirisback/frogo-consume-api
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/frogo-consume-api
Consume API for anything your problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Consume-Api'
description: 'Consume API for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/manage-azure-policy
Author: Unknown
Publisher: azure
Repository: github.com/azure/manage-azure-policy
Create or update Azure policies from your GitHub Workflows using Manage Azure Policy action.
| Name | Required | Description |
|---|---|---|
paths |
Required | mandatory. The path(s) to the directory that contains Azure policy files. The files present only in these directories will be considered by this action for updating policies in Azure. You can use wild card characters as mentioned * or ** for specifying sub folders in a path. For more details on the use of the wild cards check [glob wildcard patterns](https://github.com/isaacs/node-glob#glob-primer). Note that a definition file should be named as policy.json and assignment filenames should start with assign keyword. |
ignore-paths |
Optional | Optional. These are the directory paths that will be ignored by the action. If you have a specific policy folder that is not ready to be applied yet, specify the path here. Note that ignore-paths has a higher precedence compared to paths parameter. |
assignments |
Optional | Optional. These are policy assignment files that would be considered by the action. This parameter is especially useful if you want to apply only those assignments that correspond to a specific environment for following a safe deployment practice. E.g. _assign.AllowedVMSKUs-dev-rg.json_. You can use wild card character * to match multiple file names. E.g. _assign.\*dev\*.json_. If this parameter is not specified, the action will consider all assignment files that are present in the directories mentioned in paths parameter. |
mode |
Optional | Optional. There are 2 modes for this action - _incremental_ and _complete_. If not specified, the action will use incremental mode by default. In incremental mode, the action will compare the already existing policy in Azure with the contents of the policy provided in the repository file. It will apply the policy only if there is a mismatch. On the contrary, complete mode will apply all the files present in the specified paths irrespective of whether or not the repository policy file has been updated. |
enforce |
Optional | Optional. To override the property enforcementMode in assignments. Input is similar to assignments input. Add ~ at the beginning if you do not want to enforce the assignment(s) |
name: 'Manage Azure Policy'
description: 'Create or update Azure policies from your GitHub Workflows using Manage Azure Policy action.'
inputs:
paths:
description: 'mandatory. The path(s) to the directory that contains Azure policy files. The files present only in these directories will be considered by this action for updating policies in Azure. You can use wild card characters as mentioned * or ** for specifying sub folders in a path. For more details on the use of the wild cards check [glob wildcard patterns](https://github.com/isaacs/node-glob#glob-primer). Note that a definition file should be named as policy.json and assignment filenames should start with assign keyword.'
required: true
ignore-paths:
description: 'Optional. These are the directory paths that will be ignored by the action. If you have a specific policy folder that is not ready to be applied yet, specify the path here. Note that ignore-paths has a higher precedence compared to paths parameter.'
required: false
assignments:
description: 'Optional. These are policy assignment files that would be considered by the action. This parameter is especially useful if you want to apply only those assignments that correspond to a specific environment for following a safe deployment practice. E.g. _assign.AllowedVMSKUs-dev-rg.json_. You can use wild card character * to match multiple file names. E.g. _assign.\*dev\*.json_. If this parameter is not specified, the action will consider all assignment files that are present in the directories mentioned in paths parameter.'
required: false
mode:
required: false
description: 'Optional. There are 2 modes for this action - _incremental_ and _complete_. If not specified, the action will use incremental mode by default. In incremental mode, the action will compare the already existing policy in Azure with the contents of the policy provided in the repository file. It will apply the policy only if there is a mismatch. On the contrary, complete mode will apply all the files present in the specified paths irrespective of whether or not the repository policy file has been updated.'
enforce:
required: false
description: 'Optional. To override the property enforcementMode in assignments. Input is similar to assignments input. Add ~ at the beginning if you do not want to enforce the assignment(s)'
runs:
using: 'node16'
main: 'lib/run.js'
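A usage sketch after signing in with azure/login; the `@v2`/`@v0` refs, the `policies/**` path, and the `AZURE_CREDENTIALS` secret name are assumptions, while the `paths` and `mode` inputs come from the metadata above.
jobs:
  apply-policies:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2   # ref is an assumption
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}   # hypothetical secret name
      - name: Create or update Azure policies
        uses: azure/manage-azure-policy@v0   # ref is an assumption
        with:
          paths: |
            policies/**
          mode: incremental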
Action ID: marketplace/github/dependabot-action
Author: GitHub
Publisher: github
Repository: github.com/github/dependabot-action
Runs dependabot-updater in Actions
name: 'Updater Action'
description: 'Runs dependabot-updater in Actions'
author: 'GitHub'
runs:
using: 'node20'
main: 'dist/main/index.js'
post: 'dist/cleanup/index.js'
Action ID: marketplace/amirisback/android-coach-mark
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-coach-mark
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/actions/gradle-build-tools-actions
Author: Unknown
Publisher: actions
Repository: github.com/actions/gradle-build-tools-actions
A collection of actions for building Gradle projects, as well as generating a dependency graph via Dependency Submission.
name: Build with Gradle
description: A collection of actions for building Gradle projects, as well as generating a dependency graph via Dependency Submission.
runs:
using: "composite"
steps:
- run: |
echo "::error::The path 'gradle/actions' is not a valid action. Please use 'gradle/actions/setup-gradle' or 'gradle/actions/dependency-submission'."
exit 1
shell: bash
branding:
icon: 'box'
color: 'gray-dark'
Action ID: marketplace/dflook/terraform-unlock-state
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-unlock-state
Force unlocks a Terraform remote state.
| Name | Required | Description |
|---|---|---|
path |
Optional | Path to the Terraform root module that defines the remote state to unlock Default: . |
workspace |
Optional | Terraform workspace to unlock the remote state for Default: default |
backend_config |
Optional | List of Terraform backend config values, one per line. |
backend_config_file |
Optional | List of Terraform backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
lock_id |
Required | The ID of the state lock to release. |
name: terraform-unlock-state
description: Force unlocks a Terraform remote state.
author: Daniel Flook
inputs:
path:
description: Path to the Terraform root module that defines the remote state to unlock
required: false
default: "."
workspace:
description: Terraform workspace to unlock the remote state for
required: false
default: "default"
backend_config:
description: List of Terraform backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
lock_id:
description: The ID of the state lock to release.
required: true
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/unlock-state.sh
branding:
icon: globe
color: purple
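A usage sketch, typically run manually via workflow_dispatch; the `@v1` ref, the module path, and the dispatch trigger are placeholder assumptions, while the input names come from the metadata above.
on:
  workflow_dispatch:
    inputs:
      lock_id:
        description: Lock ID to release
        required: true
jobs:
  unlock:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Force unlock remote state
        uses: dflook/terraform-unlock-state@v1   # ref is an assumption
        with:
          path: modules/network                  # hypothetical module path
          workspace: default
          lock_id: ${{ github.event.inputs.lock_id }}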
Action ID: marketplace/amirisback/quran-android
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/quran-android
SDK for anything your problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Quran Android'
description: 'SDK for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/android-widget-with-api
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/android-widget-with-api
SDK for anything your problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'FrogoKickStartAndroid'
description: 'SDK for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/gradle/gradle-build-action
Author: Unknown
Publisher: gradle
Repository: github.com/gradle/gradle-build-action
Configures Gradle for GitHub actions, caching state and generating a dependency graph via Dependency Submission.
| Name | Required | Description |
|---|---|---|
gradle-version |
Optional | Gradle version to use. If specified, this Gradle version will be downloaded, added to the PATH and used for invoking Gradle. If not provided, it is assumed that the project uses the Gradle Wrapper. |
cache-disabled |
Optional | When 'true', all caching is disabled. No entries will be written to or read from the cache. |
cache-read-only |
Optional | When 'true', existing entries will be read from the cache but no entries will be written. By default this value is 'false' for workflows on the GitHub default branch and 'true' for workflows on other branches. Default: ${{ github.event.repository != null && github.ref_name != github.event.repository.default_branch }} |
cache-write-only |
Optional | When 'true', entries will not be restored from the cache but will be saved at the end of the Job. Setting this to 'true' implies cache-read-only will be 'false'. |
cache-overwrite-existing |
Optional | When 'true', a pre-existing Gradle User Home will not prevent the cache from being restored. |
cache-encryption-key |
Optional | A base64 encoded AES key used to encrypt the configuration-cache data. The key is exported as 'GRADLE_ENCRYPTION_KEY' for later steps. A suitable key can be generated with `openssl rand -base64 16`. Configuration-cache data will not be saved/restored without an encryption key being provided. |
gradle-home-cache-includes |
Optional | Paths within Gradle User Home to cache. Default: caches, notifications |
gradle-home-cache-excludes |
Optional | Paths within Gradle User Home to exclude from cache. |
gradle-home-cache-cleanup |
Optional | When 'true', the action will attempt to remove any stale/unused entries from the Gradle User Home prior to saving to the GitHub Actions cache. |
add-job-summary |
Optional | Specifies when a Job Summary should be included in the action results. Valid values are 'never', 'always' (default), and 'on-failure'. Default: always |
add-job-summary-as-pr-comment |
Optional | Specifies when each Job Summary should be added as a PR comment. Valid values are 'never' (default), 'always', and 'on-failure'. No action will be taken if the workflow was not triggered from a pull request. Default: never |
dependency-graph |
Optional | Specifies if a GitHub dependency snapshot should be generated for each Gradle build, and if so, how. Valid values are 'disabled' (default), 'generate', 'generate-and-submit', 'generate-and-upload', 'download-and-submit' and 'clear'. Default: disabled |
dependency-graph-continue-on-failure |
Optional | When 'false' a failure to generate or submit a dependency graph will fail the Step or Job. When 'true' a warning will be emitted but no failure will result. Default: True |
artifact-retention-days |
Optional | Specifies the number of days to retain any artifacts generated by the action. If not set, the default retention settings for the repository will apply. |
build-scan-publish |
Optional | Set to 'true' to automatically publish build results as a Build Scan on scans.gradle.com. For publication to succeed without user input, you must also provide values for `build-scan-terms-of-use-url` and 'build-scan-terms-of-use-agree'. |
build-scan-terms-of-use-url |
Optional | The URL to the Build Scan® terms of use. This input must be set to 'https://gradle.com/terms-of-service' or 'https://gradle.com/help/legal-terms-of-use'. |
build-scan-terms-of-use-agree |
Optional | Indicate that you agree to the Build Scan® terms of use. This input value must be "yes". |
develocity-access-key |
Optional | Develocity access key. Should be set to a secret containing the Develocity Access key. |
develocity-token-expiry |
Optional | The Develocity short-lived access tokens expiry in hours. Default is 2 hours. |
develocity-injection-enabled |
Optional | Enables Develocity injection. |
develocity-url |
Optional | The URL for the Develocity server. |
develocity-allow-untrusted-server |
Optional | Allow communication with an untrusted server; set to _true_ if your Develocity instance is using a self-signed certificate. |
develocity-capture-file-fingerprints |
Optional | Enables capturing the paths and content hashes of each individual input file. |
develocity-enforce-url |
Optional | Enforce the configured Develocity URL over a URL configured in the project's build; set to _true_ to enforce publication of build scans to the configured Develocity URL. |
develocity-plugin-version |
Optional | The version of the Develocity Gradle plugin to apply. |
develocity-ccud-plugin-version |
Optional | The version of the Common Custom User Data Gradle plugin to apply, if any. |
gradle-plugin-repository-url |
Optional | The URL of the repository to use when resolving the Develocity and CCUD plugins; the Gradle Plugin Portal is used by default. |
gradle-plugin-repository-username |
Optional | The username for the repository URL to use when resolving the Develocity and CCUD plugins. |
gradle-plugin-repository-password |
Optional | The password for the repository URL to use when resolving the Develocity and CCUD plugins; Consider using secrets to pass the value to this variable. |
validate-wrappers |
Optional | When 'true', the action will perform the 'wrapper-validation' action automatically. If the wrapper checksums are not valid, the action will fail. |
build-scan-terms-of-service-url |
Optional | The URL to the Build Scan® terms of use. This input must be set to 'https://gradle.com/terms-of-service'. |
build-scan-terms-of-service-agree |
Optional | Indicate that you agree to the Build Scan® terms of use. This input value must be "yes". |
generate-job-summary |
Optional | When 'false', no Job Summary will be generated for the Job. Default: True |
arguments |
Optional | Gradle command line arguments (supports multi-line input) |
build-root-directory |
Optional | Path to the root directory of the build. Default is the root of the GitHub workspace. |
gradle-home-cache-strict-match |
Optional | When 'true', the action will not attempt to restore the Gradle User Home entries from other Jobs. |
workflow-job-context |
Optional | Used to uniquely identify the current job invocation. Defaults to the matrix values for this job; this should not be overridden by users (INTERNAL). Default: ${{ toJSON(matrix) }} |
github-token |
Optional | The GitHub token used to authenticate when submitting via the Dependency Submission API. Default: ${{ github.token }} |
| Name | Description |
|---|---|
build-scan-url |
Link to the Build Scan® generated by a Gradle build. Note that this output applies to a Step executing Gradle, not to the `setup-gradle` Step itself. |
dependency-graph-file |
Path to the GitHub Dependency Graph snapshot file generated by a Gradle build. Note that this output applies to a Step executing Gradle, not to the `setup-gradle` Step itself. |
gradle-version |
Version of Gradle that was setup by the action |
name: setup-gradle
description: 'Configures Gradle for GitHub actions, caching state and generating a dependency graph via Dependency Submission.'
inputs:
gradle-version:
description: |
Gradle version to use. If specified, this Gradle version will be downloaded, added to the PATH and used for invoking Gradle.
If not provided, it is assumed that the project uses the Gradle Wrapper.
required: false
# Cache configuration
cache-disabled:
description: When 'true', all caching is disabled. No entries will be written to or read from the cache.
required: false
default: false
cache-read-only:
description: |
When 'true', existing entries will be read from the cache but no entries will be written.
By default this value is 'false' for workflows on the GitHub default branch and 'true' for workflows on other branches.
required: false
default: ${{ github.event.repository != null && github.ref_name != github.event.repository.default_branch }}
cache-write-only:
description: |
When 'true', entries will not be restored from the cache but will be saved at the end of the Job.
Setting this to 'true' implies cache-read-only will be 'false'.
required: false
default: false
cache-overwrite-existing:
description: When 'true', a pre-existing Gradle User Home will not prevent the cache from being restored.
required: false
default: false
cache-encryption-key:
description: |
A base64 encoded AES key used to encrypt the configuration-cache data. The key is exported as 'GRADLE_ENCRYPTION_KEY' for later steps.
A suitable key can be generated with `openssl rand -base64 16`.
Configuration-cache data will not be saved/restored without an encryption key being provided.
required: false
gradle-home-cache-includes:
description: Paths within Gradle User Home to cache.
required: false
default: |
caches
notifications
gradle-home-cache-excludes:
description: Paths within Gradle User Home to exclude from cache.
required: false
gradle-home-cache-cleanup:
description: When 'true', the action will attempt to remove any stale/unused entries from the Gradle User Home prior to saving to the GitHub Actions cache.
required: false
default: false
# Job summary configuration
add-job-summary:
description: Specifies when a Job Summary should be included in the action results. Valid values are 'never', 'always' (default), and 'on-failure'.
required: false
default: 'always'
add-job-summary-as-pr-comment:
description: Specifies when each Job Summary should be added as a PR comment. Valid values are 'never' (default), 'always', and 'on-failure'. No action will be taken if the workflow was not triggered from a pull request.
required: false
default: 'never'
# Dependency Graph configuration
dependency-graph:
description: |
Specifies if a GitHub dependency snapshot should be generated for each Gradle build, and if so, how.
Valid values are 'disabled' (default), 'generate', 'generate-and-submit', 'generate-and-upload', 'download-and-submit' and 'clear'.
required: false
default: 'disabled'
dependency-graph-continue-on-failure:
description: When 'false' a failure to generate or submit a dependency graph will fail the Step or Job. When 'true' a warning will be emitted but no failure will result.
required: false
default: true
artifact-retention-days:
description: Specifies the number of days to retain any artifacts generated by the action. If not set, the default retention settings for the repository will apply.
required: false
# Build Scan configuration
build-scan-publish:
description: |
Set to 'true' to automatically publish build results as a Build Scan on scans.gradle.com.
For publication to succeed without user input, you must also provide values for `build-scan-terms-of-use-url` and 'build-scan-terms-of-use-agree'.
required: false
default: false
build-scan-terms-of-use-url:
description: The URL to the Build Scan® terms of use. This input must be set to 'https://gradle.com/terms-of-service' or 'https://gradle.com/help/legal-terms-of-use'.
required: false
build-scan-terms-of-use-agree:
description: Indicate that you agree to the Build Scan® terms of use. This input value must be "yes".
required: false
develocity-access-key:
description: Develocity access key. Should be set to a secret containing the Develocity Access key.
required: false
develocity-token-expiry:
description: The Develocity short-lived access tokens expiry in hours. Default is 2 hours.
required: false
develocity-injection-enabled:
description: Enables Develocity injection.
required: false
develocity-url:
description: The URL for the Develocity server.
required: false
develocity-allow-untrusted-server:
description: Allow communication with an untrusted server; set to _true_ if your Develocity instance is using a self-signed certificate.
required: false
develocity-capture-file-fingerprints:
description: Enables capturing the paths and content hashes of each individual input file.
required: false
develocity-enforce-url:
description: Enforce the configured Develocity URL over a URL configured in the project's build; set to _true_ to enforce publication of build scans to the configured Develocity URL.
required: false
develocity-plugin-version:
description: The version of the Develocity Gradle plugin to apply.
required: false
develocity-ccud-plugin-version:
description: The version of the Common Custom User Data Gradle plugin to apply, if any.
required: false
gradle-plugin-repository-url:
description: The URL of the repository to use when resolving the Develocity and CCUD plugins; the Gradle Plugin Portal is used by default.
required: false
gradle-plugin-repository-username:
description: The username for the repository URL to use when resolving the Develocity and CCUD plugins.
required: false
gradle-plugin-repository-password:
description: The password for the repository URL to use when resolving the Develocity and CCUD plugins; Consider using secrets to pass the value to this variable.
required: false
# Wrapper validation configuration
validate-wrappers:
description: |
When 'true', the action will perform the 'wrapper-validation' action automatically.
If the wrapper checksums are not valid, the action will fail.
required: false
default: false
# DEPRECATED ACTION INPUTS
build-scan-terms-of-service-url:
description: The URL to the Build Scan® terms of use. This input must be set to 'https://gradle.com/terms-of-service'.
required: false
deprecation-message: The input has been renamed to align with the Develocity API. Use 'build-scan-terms-of-use-url' instead.
build-scan-terms-of-service-agree:
description: Indicate that you agree to the Build Scan® terms of use. This input value must be "yes".
required: false
deprecation-message: The input has been renamed to align with the Develocity API. Use 'build-scan-terms-of-use-agree' instead.
generate-job-summary:
description: When 'false', no Job Summary will be generated for the Job.
required: false
default: true
deprecation-message: Superseded by the new 'add-job-summary' and 'add-job-summary-as-pr-comment' parameters.
arguments:
description: Gradle command line arguments (supports multi-line input)
required: false
deprecation-message: Using the action to execute Gradle directly is deprecated in favor of using the action to setup Gradle, and executing Gradle in a subsequent Step.
build-root-directory:
description: Path to the root directory of the build. Default is the root of the GitHub workspace.
required: false
deprecation-message: Using the action to execute Gradle directly is deprecated in favor of using the action to setup Gradle, and executing Gradle in a subsequent Step.
# EXPERIMENTAL ACTION INPUTS
# The following action properties allow fine-grained tweaking of the action caching behaviour.
# These properties are experimental and not (yet) designed for production use, and may change without notice in a subsequent release of `setup-gradle`.
# Use at your own risk!
gradle-home-cache-strict-match:
description: When 'true', the action will not attempt to restore the Gradle User Home entries from other Jobs.
required: false
default: false
# INTERNAL ACTION INPUTS
# These inputs should not be configured directly, and are only used to pass environmental information to the action
workflow-job-context:
description: Used to uniquely identify the current job invocation. Defaults to the matrix values for this job; this should not be overridden by users (INTERNAL).
required: false
default: ${{ toJSON(matrix) }}
github-token:
description: The GitHub token used to authenticate when submitting via the Dependency Submission API.
default: ${{ github.token }}
required: false
outputs:
build-scan-url:
description: Link to the Build Scan® generated by a Gradle build. Note that this output applies to a Step executing Gradle, not to the `setup-gradle` Step itself.
value: ${{ steps.setup-gradle.outputs.build-scan-url }}
dependency-graph-file:
description: Path to the GitHub Dependency Graph snapshot file generated by a Gradle build. Note that this output applies to a Step executing Gradle, not to the `setup-gradle` Step itself.
value: ${{ steps.setup-gradle.outputs.dependency-graph-file }}
gradle-version:
description: Version of Gradle that was setup by the action
value: ${{ steps.setup-gradle.outputs.gradle-version }}
runs:
using: "composite"
steps:
- name: Setup Gradle
id: setup-gradle
uses: gradle/actions/setup-gradle@v3.5.0
with:
gradle-version: ${{ inputs.gradle-version }}
cache-disabled: ${{ inputs.cache-disabled }}
cache-read-only: ${{ inputs.cache-read-only }}
cache-write-only: ${{ inputs.cache-write-only }}
cache-overwrite-existing: ${{ inputs.cache-overwrite-existing }}
cache-encryption-key: ${{ inputs.cache-encryption-key }}
gradle-home-cache-includes: ${{ inputs.gradle-home-cache-includes }}
gradle-home-cache-excludes: ${{ inputs.gradle-home-cache-excludes }}
gradle-home-cache-cleanup: ${{ inputs.gradle-home-cache-cleanup }}
add-job-summary: ${{ inputs.add-job-summary }}
add-job-summary-as-pr-comment: ${{ inputs.add-job-summary-as-pr-comment }}
dependency-graph: ${{ inputs.dependency-graph }}
dependency-graph-continue-on-failure: ${{ inputs.dependency-graph-continue-on-failure }}
artifact-retention-days: ${{ inputs.artifact-retention-days }}
build-scan-publish: ${{ inputs.build-scan-publish }}
build-scan-terms-of-use-url: ${{ inputs.build-scan-terms-of-use-url }}
build-scan-terms-of-use-agree: ${{ inputs.build-scan-terms-of-use-agree }}
validate-wrappers: ${{ inputs.validate-wrappers }}
build-scan-terms-of-service-url: ${{ inputs.build-scan-terms-of-service-url }}
build-scan-terms-of-service-agree: ${{ inputs.build-scan-terms-of-service-agree }}
generate-job-summary: ${{ inputs.generate-job-summary }}
arguments: ${{ inputs.arguments }}
build-root-directory: ${{ inputs.build-root-directory }}
gradle-home-cache-strict-match: ${{ inputs.gradle-home-cache-strict-match }}
workflow-job-context: ${{ inputs.workflow-job-context }}
github-token: ${{ inputs.github-token }}
develocity-access-key: ${{ inputs.develocity-access-key }}
develocity-token-expiry: ${{ inputs.develocity-token-expiry }}
develocity-injection-enabled: ${{ inputs.develocity-injection-enabled }}
develocity-url: ${{ inputs.develocity-url }}
develocity-allow-untrusted-server: ${{ inputs.develocity-allow-untrusted-server }}
develocity-capture-file-fingerprints: ${{ inputs.develocity-capture-file-fingerprints }}
develocity-enforce-url: ${{ inputs.develocity-enforce-url }}
develocity-plugin-version: ${{ inputs.develocity-plugin-version }}
develocity-ccud-plugin-version: ${{ inputs.develocity-ccud-plugin-version }}
gradle-plugin-repository-url: ${{ inputs.gradle-plugin-repository-url }}
gradle-plugin-repository-username: ${{ inputs.gradle-plugin-repository-username }}
gradle-plugin-repository-password: ${{ inputs.gradle-plugin-repository-password }}
env:
GRADLE_ACTION_ID: gradle/gradle-build-action
branding:
icon: 'box'
color: 'gray-dark'
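A usage sketch; the `@v3`/`@v4` refs, the JDK version, and the `contents: write` permission (assumed to be needed when submitting a dependency graph) are assumptions, while the action inputs are documented above.
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # assumed requirement for dependency graph submission
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4   # ref is an assumption
        with:
          distribution: temurin
          java-version: '21'          # hypothetical JDK version
      - name: Set up Gradle
        uses: gradle/gradle-build-action@v3   # ref is an assumption
        with:
          dependency-graph: generate-and-submit
          cache-read-only: ${{ github.ref_name != 'main' }}   # hypothetical default branch
      - run: ./gradlew build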
Action ID: marketplace/appleboy/scp-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/scp-action
Easily transfer files and folders using the SCP command in Linux.
| Name | Required | Description |
|---|---|---|
host |
Optional | Remote host address for SCP (e.g., example.com or 192.168.1.1). |
port |
Optional | Remote SSH port for SCP. Default: 22. Default: 22 |
username |
Optional | Username for SSH authentication. |
password |
Optional | Password for SSH authentication (not recommended; use SSH keys if possible). |
protocol |
Optional | IP protocol to use. Valid values: 'tcp', 'tcp4', or 'tcp6'. Default: tcp. Default: tcp |
timeout |
Optional | Timeout for establishing SSH connection to the remote host. Default: 30s. Default: 30s |
command_timeout |
Optional | Timeout for the SCP command execution. Default: 10m. Default: 10m |
key |
Optional | Content of the SSH private key (e.g., the raw content of ~/.ssh/id_rsa). |
key_path |
Optional | Path to the SSH private key file. |
passphrase |
Optional | Passphrase for the SSH private key, if required. |
fingerprint |
Optional | SHA256 fingerprint of the host's public key. If not set, host key verification is skipped (not recommended for production). |
use_insecure_cipher |
Optional | Enable additional, less secure ciphers for compatibility. Not recommended unless required. |
target |
Optional | Target directory path on the remote server. Must be a directory. |
source |
Optional | List of files or directories to transfer (local paths). |
rm |
Optional | Remove the target directory on the server before uploading new data. |
debug |
Optional | Enable debug messages for troubleshooting. |
strip_components |
Optional | Remove the specified number of leading path elements when extracting files. |
overwrite |
Optional | Use the --overwrite flag with tar to overwrite existing files. |
tar_dereference |
Optional | Use the --dereference flag with tar to follow symlinks. |
tar_tmp_path |
Optional | Temporary path for the tar file on the destination host. |
tar_exec |
Optional | Path to the tar executable on the destination host. Default: tar |
proxy_host |
Optional | Remote host address for SSH proxy. |
proxy_port |
Optional | SSH proxy port. Default: 22 |
proxy_username |
Optional | Username for SSH proxy authentication. |
proxy_password |
Optional | Password for SSH proxy authentication. |
proxy_passphrase |
Optional | Passphrase for the SSH proxy private key, if required. |
proxy_timeout |
Optional | Timeout for establishing SSH connection to the proxy host. Default: 30s |
proxy_key |
Optional | Content of the SSH proxy private key (e.g., the raw content of ~/.ssh/id_rsa). |
proxy_key_path |
Optional | Path to the SSH proxy private key file. |
proxy_fingerprint |
Optional | SHA256 fingerprint of the proxy host's public key. If not set, host key verification is skipped (not recommended for production). |
proxy_use_insecure_cipher |
Optional | Enable additional, less secure ciphers for the proxy connection. Not recommended unless required. |
curl_insecure |
Optional | When true, uses the --insecure option with curl for insecure downloads. Default: false |
capture_stdout |
Optional | When true, captures and returns standard output from the commands as action output. Default: false |
version |
Optional | The version of drone-scp to use. |
| Name | Description |
|---|---|
stdout |
Standard output of the executed commands when capture_stdout is enabled. |
name: "SCP Command to Transfer Files"
description: "Easily transfer files and folders using the SCP command in Linux."
author: "Bo-Yi Wu"
inputs:
host:
description: "Remote host address for SCP (e.g., example.com or 192.168.1.1)."
port:
description: "Remote SSH port for SCP. Default: 22."
default: "22"
username:
description: "Username for SSH authentication."
password:
description: "Password for SSH authentication (not recommended; use SSH keys if possible)."
protocol:
description: "IP protocol to use. Valid values: 'tcp', 'tcp4', or 'tcp6'. Default: tcp."
default: "tcp"
timeout:
description: "Timeout for establishing SSH connection to the remote host. Default: 30s."
default: "30s"
command_timeout:
description: "Timeout for the SCP command execution. Default: 10m."
default: "10m"
key:
description: "Content of the SSH private key (e.g., the raw content of ~/.ssh/id_rsa)."
key_path:
description: "Path to the SSH private key file."
passphrase:
description: "Passphrase for the SSH private key, if required."
fingerprint:
description: "SHA256 fingerprint of the host's public key. If not set, host key verification is skipped (not recommended for production)."
use_insecure_cipher:
description: "Enable additional, less secure ciphers for compatibility. Not recommended unless required."
target:
description: "Target directory path on the remote server. Must be a directory."
source:
description: "List of files or directories to transfer (local paths)."
rm:
description: "Remove the target directory on the server before uploading new data."
debug:
description: "Enable debug messages for troubleshooting."
strip_components:
description: "Remove the specified number of leading path elements when extracting files."
overwrite:
description: "Use the --overwrite flag with tar to overwrite existing files."
tar_dereference:
description: "Use the --dereference flag with tar to follow symlinks."
tar_tmp_path:
description: "Temporary path for the tar file on the destination host."
tar_exec:
description: "Path to the tar executable on the destination host. Default: tar."
default: "tar"
proxy_host:
description: "Remote host address for SSH proxy."
proxy_port:
description: "SSH proxy port. Default: 22."
default: "22"
proxy_username:
description: "Username for SSH proxy authentication."
proxy_password:
description: "Password for SSH proxy authentication."
proxy_passphrase:
description: "Passphrase for the SSH proxy private key, if required."
proxy_timeout:
description: "Timeout for establishing SSH connection to the proxy host. Default: 30s."
default: "30s"
proxy_key:
description: "Content of the SSH proxy private key (e.g., the raw content of ~/.ssh/id_rsa)."
proxy_key_path:
description: "Path to the SSH proxy private key file."
proxy_fingerprint:
description: "SHA256 fingerprint of the proxy host's public key. If not set, host key verification is skipped (not recommended for production)."
proxy_use_insecure_cipher:
description: "Enable additional, less secure ciphers for the proxy connection. Not recommended unless required."
curl_insecure:
description: "When true, uses the --insecure option with curl for insecure downloads."
default: "false"
capture_stdout:
description: "When true, captures and returns standard output from the commands as action output."
default: "false"
version:
description: |
The version of drone-scp to use.
outputs:
stdout:
description: "Standard output of the executed commands when capture_stdout is enabled."
value: ${{ steps.entrypoint.outputs.stdout }}
runs:
using: "composite"
steps:
- name: Set GitHub Path
run: echo "$GITHUB_ACTION_PATH" >> $GITHUB_PATH
shell: bash
env:
GITHUB_ACTION_PATH: ${{ github.action_path }}
- id: entrypoint
name: Run entrypoint.sh
run: entrypoint.sh
shell: bash
env:
GITHUB_ACTION_PATH: ${{ github.action_path }}
INPUT_HOST: ${{ inputs.host }}
INPUT_PORT: ${{ inputs.port }}
INPUT_PROTOCOL: ${{ inputs.protocol }}
INPUT_USERNAME: ${{ inputs.username }}
INPUT_PASSWORD: ${{ inputs.password }}
INPUT_PASSPHRASE: ${{ inputs.passphrase }}
INPUT_KEY: ${{ inputs.key }}
INPUT_KEY_PATH: ${{ inputs.key_path }}
INPUT_FINGERPRINT: ${{ inputs.fingerprint }}
INPUT_PROXY_HOST: ${{ inputs.proxy_host }}
INPUT_PROXY_PORT: ${{ inputs.proxy_port }}
INPUT_PROXY_USERNAME: ${{ inputs.proxy_username }}
INPUT_PROXY_PASSWORD: ${{ inputs.proxy_password }}
INPUT_PROXY_PASSPHRASE: ${{ inputs.proxy_passphrase }}
INPUT_PROXY_KEY: ${{ inputs.proxy_key }}
INPUT_PROXY_KEY_PATH: ${{ inputs.proxy_key_path }}
INPUT_PROXY_FINGERPRINT: ${{ inputs.proxy_fingerprint }}
INPUT_USE_INSECURE_CIPHER: ${{ inputs.use_insecure_cipher }}
INPUT_CIPHER: ${{ inputs.cipher }}
INPUT_PROXY_USE_INSECURE_CIPHER: ${{ inputs.proxy_use_insecure_cipher }}
INPUT_PROXY_CIPHER: ${{ inputs.proxy_cipher }}
INPUT_DEBUG: ${{ inputs.debug }}
INPUT_TIMEOUT: ${{ inputs.timeout }}
INPUT_COMMAND_TIMEOUT: ${{ inputs.command_timeout }}
INPUT_TARGET: ${{ inputs.target }}
INPUT_SOURCE: ${{ inputs.source }}
INPUT_RM: ${{ inputs.rm }}
INPUT_STRIP_COMPONENTS: ${{ inputs.strip_components }}
INPUT_OVERWRITE: ${{ inputs.overwrite }}
INPUT_TAR_DEREFERENCE: ${{ inputs.tar_dereference }}
INPUT_TAR_TMP_PATH: ${{ inputs.tar_tmp_path }}
INPUT_TAR_EXEC: ${{ inputs.tar_exec }}
INPUT_PROXY_TIMEOUT: ${{ inputs.proxy_timeout }}
INPUT_CAPTURE_STDOUT: ${{ inputs.capture_stdout }}
INPUT_CURL_INSECURE: ${{ inputs.curl_insecure }}
DRONE_SCP_VERSION: ${{ inputs.version }}
branding:
icon: "copy"
color: "gray-dark"
Action ID: marketplace/aws-actions/amazon-ecs-deploy-task-definition
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/amazon-ecs-deploy-task-definition
Registers an Amazon ECS task definition, and deploys it to an ECS service
| Name | Required | Description |
|---|---|---|
task-definition |
Required | The path to the ECS task definition file to register. |
desired-count |
Optional | The number of instantiations of the task to place and keep running in your service. |
service |
Optional | The name of the ECS service to deploy to. If no service is given, the action will not deploy the task, but only register the task definition. |
cluster |
Optional | The name of the ECS service's cluster. Will default to the 'default' cluster. |
wait-for-service-stability |
Optional | Whether to wait for the ECS service to reach stable state after deploying the new task definition. Valid value is "true". Will default to not waiting. |
wait-for-minutes |
Optional | How long to wait for the ECS service to reach stable state, in minutes (default: 30 minutes, max: 6 hours). For CodeDeploy deployments, any wait time configured in the CodeDeploy deployment group will be added to this value. |
codedeploy-appspec |
Optional | The path to the AWS CodeDeploy AppSpec file, if the ECS service uses the CODE_DEPLOY deployment controller. Will default to 'appspec.yaml'. |
codedeploy-application |
Optional | The name of the AWS CodeDeploy application, if the ECS service uses the CODE_DEPLOY deployment controller. Will default to 'AppECS-{cluster}-{service}'. |
codedeploy-deployment-group |
Optional | The name of the AWS CodeDeploy deployment group, if the ECS service uses the CODE_DEPLOY deployment controller. Will default to 'DgpECS-{cluster}-{service}'. |
codedeploy-deployment-description |
Optional | A description of the deployment, if the ECS service uses the CODE_DEPLOY deployment controller. NOTE: This will be truncated to 512 characters if necessary. |
codedeploy-deployment-config |
Optional | The name of the AWS CodeDeploy deployment configuration, if the ECS service uses the CODE_DEPLOY deployment controller. If not specified, the value configured in the deployment group or `CodeDeployDefault.OneAtATime` is used as the default. |
force-new-deployment |
Optional | Whether to force a new deployment of the service. Valid value is "true". Will default to not force a new deployment. |
service-managed-ebs-volume-name |
Optional | The name of the volume to be managed in the ECS service. This value must match the volume name from the Volume object in the task definition that was configuredAtLaunch. |
service-managed-ebs-volume |
Optional | A JSON object defining the configuration settings for the EBS Service volume that was ConfiguredAtLaunch. You can configure size, volumeType, IOPS, throughput, snapshot and encryption in ServiceManagedEBSVolumeConfiguration. Currently, the only supported volume type is an Amazon EBS volume. |
run-task |
Optional | A boolean indicating whether to run a stand-alone task in an ECS cluster. The task will run before the service is updated if both are provided. Default value is false. |
run-task-container-overrides |
Optional | A JSON array of container override objects which should be applied when running a task outside of a service. Warning: Do not expose this field to untrusted inputs. More details: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html |
run-task-security-groups |
Optional | A comma-separated list of security group IDs to assign to a task when run outside of a service. Will default to none. |
run-task-subnets |
Optional | A comma-separated list of subnet IDs to assign to a task when run outside of a service. Will default to none. |
run-task-assign-public-IP |
Optional | Whether the task's elastic network interface receives a public IP address. The default value is DISABLED but will only be applied if run-task-subnets or run-task-security-groups are also set. |
run-task-capacity-provider-strategy |
Optional | A JSON array of capacity provider strategy items which should be applied when running a task outside of a service. Will default to none. |
run-task-launch-type |
Optional | ECS launch type for tasks run outside of a service. Valid values are 'FARGATE' or 'EC2'. Will default to 'FARGATE'. Will only be applied if run-task-capacity-provider-strategy is not set. |
run-task-started-by |
Optional | A name to use for the startedBy tag when running a task outside of a service. Will default to 'GitHub-Actions'. |
run-task-tags |
Optional | A JSON array of tags. |
run-task-managed-ebs-volume-name |
Optional | The name of the volume. This value must match the volume name from the Volume object in the task definition that was configuredAtLaunch. |
run-task-managed-ebs-volume |
Optional | A JSON object defining the configuration settings for the Amazon EBS task volume that was configuredAtLaunch. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created. |
wait-for-task-stopped |
Optional | Whether to wait for the task to stop when running it outside of a service. Will default to not wait. |
enable-ecs-managed-tags |
Optional | Determines whether to turn on Amazon ECS managed tags 'aws:ecs:serviceName' and 'aws:ecs:clusterName' for the tasks in the service. |
propagate-tags |
Optional | Determines whether to propagate the tags from the 'SERVICE' to the task. |
keep-null-value-keys |
Optional | A comma-separated list of keys whose empty values (empty string, array, or object) should be preserved in the task definition. By default, empty values are removed. |
| Name | Description |
|---|---|
task-definition-arn |
The ARN of the registered ECS task definition. |
codedeploy-deployment-id |
The deployment ID of the CodeDeploy deployment (if the ECS service uses the CODE_DEPLOY deployment controller). |
run-task-arn |
The ARN(s) of the task(s) that were started by the run-task option. Output is in an array JSON format. |
name: 'Amazon ECS "Deploy Task Definition" Action for GitHub Actions'
description: 'Registers an Amazon ECS task definition, and deploys it to an ECS service'
branding:
icon: 'cloud'
color: 'orange'
inputs:
task-definition:
description: 'The path to the ECS task definition file to register.'
required: true
desired-count:
description: 'The number of instantiations of the task to place and keep running in your service.'
required: false
service:
description: 'The name of the ECS service to deploy to. If no service is given, the action will not deploy the task, but only register the task definition.'
required: false
cluster:
description: "The name of the ECS service's cluster. Will default to the 'default' cluster."
required: false
wait-for-service-stability:
description: 'Whether to wait for the ECS service to reach stable state after deploying the new task definition. Valid value is "true". Will default to not waiting.'
required: false
wait-for-minutes:
description: 'How long to wait for the ECS service to reach stable state, in minutes (default: 30 minutes, max: 6 hours). For CodeDeploy deployments, any wait time configured in the CodeDeploy deployment group will be added to this value.'
required: false
codedeploy-appspec:
description: "The path to the AWS CodeDeploy AppSpec file, if the ECS service uses the CODE_DEPLOY deployment controller. Will default to 'appspec.yaml'."
required: false
codedeploy-application:
description: "The name of the AWS CodeDeploy application, if the ECS service uses the CODE_DEPLOY deployment controller. Will default to 'AppECS-{cluster}-{service}'."
required: false
codedeploy-deployment-group:
description: "The name of the AWS CodeDeploy deployment group, if the ECS service uses the CODE_DEPLOY deployment controller. Will default to 'DgpECS-{cluster}-{service}'."
required: false
codedeploy-deployment-description:
description: "A description of the deployment, if the ECS service uses the CODE_DEPLOY deployment controller. NOTE: This will be truncated to 512 characters if necessary."
required: false
codedeploy-deployment-config:
description: "The name of the AWS CodeDeploy deployment configuration, if the ECS service uses the CODE_DEPLOY deployment controller. If not specified, the value configured in the deployment group or `CodeDeployDefault.OneAtATime` is used as the default."
required: false
force-new-deployment:
description: 'Whether to force a new deployment of the service. Valid value is "true". Will default to not force a new deployment.'
required: false
service-managed-ebs-volume-name:
description: "The name of the volume, to be manage in the ECS service. This value must match the volume name from the Volume object in the task definition, that was configuredAtLaunch."
required: false
service-managed-ebs-volume:
description: "A JSON object defining the configuration settings for the EBS Service volume that was ConfiguredAtLaunch. You can configure size, volumeType, IOPS, throughput, snapshot and encryption in ServiceManagedEBSVolumeConfiguration. Currently, the only supported volume type is an Amazon EBS volume."
required: false
run-task:
description: 'A boolean indicating whether to run a stand-alone task in a ECS cluster. Task will run before the service is updated if both are provided. Default value is false .'
required: false
run-task-container-overrides:
description: 'A JSON array of container override objects which should applied when running a task outside of a service. Warning: Do not expose this field to untrusted inputs. More details: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html'
required: false
run-task-security-groups:
description: 'A comma-separated list of security group IDs to assign to a task when run outside of a service. Will default to none.'
required: false
run-task-subnets:
description: 'A comma-separated list of subnet IDs to assign to a task when run outside of a service. Will default to none.'
required: false
run-task-assign-public-IP:
description: "Whether the task's elastic network interface receives a public IP address. The default value is DISABLED but will only be applied if run-task-subnets or run-task-security-groups are also set."
required: false
run-task-capacity-provider-strategy:
description: 'A JSON array of capacity provider strategy items which should applied when running a task outside of a service. Will default to none.'
required: false
run-task-launch-type:
description: "ECS launch type for tasks run outside of a service. Valid values are 'FARGATE' or 'EC2'. Will default to 'FARGATE'. Will only be applied if run-task-capacity-provider-strategy is not set."
required: false
run-task-started-by:
description: "A name to use for the startedBy tag when running a task outside of a service. Will default to 'GitHub-Actions'."
required: false
run-task-tags:
description: 'A JSON array of tags.'
required: false
run-task-managed-ebs-volume-name:
description: "The name of the volume. This value must match the volume name from the Volume object in the task definition, that was configuredAtLaunch."
required: false
run-task-managed-ebs-volume:
description: "A JSON object defining the configuration settings for the Amazon EBS task volume that was configuredAtLaunch. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created."
required: false
wait-for-task-stopped:
description: 'Whether to wait for the task to stop when running it outside of a service. Will default to not wait.'
required: false
enable-ecs-managed-tags:
description: "Determines whether to turn on Amazon ECS managed tags 'aws:ecs:serviceName' and 'aws:ecs:clusterName' for the tasks in the service."
required: false
propagate-tags:
description: "Determines to propagate the tags from the 'SERVICE' to the task."
required: false
keep-null-value-keys:
description: 'A comma-separated list of keys whose empty values (empty string, array, or object) should be preserved in the task definition. By default, empty values are removed.'
required: false
outputs:
task-definition-arn:
description: 'The ARN of the registered ECS task definition.'
codedeploy-deployment-id:
description: 'The deployment ID of the CodeDeploy deployment (if the ECS service uses the CODE_DEPLOY deployment controller).'
run-task-arn:
description: 'The ARN(s) of the task(s) that were started by the run-task option. Output is in an array JSON format.'
runs:
using: 'node20'
main: 'dist/index.js'
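A usage sketch for the action above. It assumes AWS credentials are configured earlier in the job (shown here with `aws-actions/configure-aws-credentials`); the version tags, role ARN, and service/cluster names are placeholders:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4   # version tag is an assumption
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1
      - name: Register and deploy the task definition
        id: ecs-deploy
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: task-definition.json
          service: my-service
          cluster: my-cluster
          wait-for-service-stability: true
      - name: Show the registered task definition ARN
        run: echo "${{ steps.ecs-deploy.outputs.task-definition-arn }}"
```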
Action ID: marketplace/google-github-actions/deploy-cloudrun
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/deploy-cloudrun
Use this action to deploy a container or source code to Google Cloud Run.
| Name | Required | Description |
|---|---|---|
service |
Optional | ID of the service or fully-qualified identifier of the service. This is required unless providing `metadata` or `job`. |
job |
Optional | ID of the job or fully-qualified identifier of the job. This is required unless providing `metadata` or `service`. |
metadata |
Optional | YAML service description for the Cloud Run service. This is required unless providing `service` or `job`. |
image |
Optional | (Required, unless providing `metadata` or `source`) Fully-qualified name of the container image to deploy. For example: us-docker.pkg.dev/cloudrun/container/hello:latest or us-docker.pkg.dev/my-project/my-container/image:1.2.3 |
source |
Optional | (Required, unless providing `metadata`, `image`, or `job`) Path to source to deploy. If specified, this will deploy the Cloud Run service from the code specified at the given source directory. Learn more about the required permissions in [Deploying from source code](https://cloud.google.com/run/docs/deploying-source-code). |
suffix |
Optional | String suffix to append to the revision name. Revision names always start with the service name automatically. For example, specifying `v1` for a service named `helloworld`, would lead to a revision named `helloworld-v1`. This option only applies to services. |
env_vars |
Optional | List of environment variables that should be set in the environment. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml env_vars: |- FRUIT=apple SENTENCE=" this will retain leading and trailing spaces " ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. If both `env_vars` and `env_vars_file` are specified, the keys in `env_vars` will take precedence over the keys in `env_vars_file`. |
env_vars_update_strategy |
Required | Controls how the environment variables are set on the Cloud Run service.
If set to "merge", then the environment variables are _merged_ with any
upstream values. If set to "overwrite", then all environment variables on
the Cloud Run service will be replaced with exactly the values given by
the GitHub Action (making it authoritative). Default: merge |
secrets |
Optional | List of KEY=VALUE pairs to use as secrets. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. These can either be injected as environment variables or mounted as volumes. Keys starting with a forward slash '/' are mount paths. All other keys correspond to environment variables: ```yaml with: secrets: |- # As an environment variable: KEY1=secret-key-1:latest # As a volume mount: /secrets/api/key=secret-key-2:latest ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. |
secrets_update_strategy |
Required | Controls how the secrets are set on the Cloud Run service. If set to
`merge`, then the secrets are merged with any upstream values. If set to
`overwrite`, then all secrets on the Cloud Run service will be replaced
with exactly the values given by the GitHub Action (making it
authoritative). Default: merge |
labels |
Optional | List of labels that should be set on the function. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml labels: |- labela=my-label labelb=my-other-label ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. Google Cloud restricts the allowed values and length for labels. Please see the Google Cloud documentation for labels for more information. |
skip_default_labels |
Optional | Skip applying the special annotation labels that indicate the deployment
came from GitHub Actions. The GitHub Action will automatically apply the
following labels which Cloud Run uses to enhance the user experience:
managed-by: github-actions
commit-sha: <sha>
Setting this to `true` will skip adding these special labels. Default: false |
tag |
Optional | Traffic tag to assign to the newly-created revision. This option only applies to services. |
timeout |
Optional | Maximum request execution time, specified as a duration like "10m5s" for ten minutes and 5 seconds. |
flags |
Optional | Space-separated list of additional Cloud Run flags to pass to the deploy command. This can be used to apply advanced features that are not exposed via this GitHub Action. For Cloud Run services, this command will be `gcloud run deploy`. For Cloud Run jobs, this command will be `gcloud jobs deploy`. ```yaml with: flags: '--add-cloudsql-instances=...' ``` Flags that include other flags must quote the _entire_ outer flag value. For example, to pass `--args=-X=123`: ```yaml with: flags: '--add-cloudsql-instances=... "--args=-X=123"' ``` See the [complete list of flags](https://cloud.google.com/sdk/gcloud/reference/run/deploy#FLAGS) for more information. Please note, this GitHub Action does not parse or validate the flags. You are responsible for making sure the flags are available on the gcloud version and subcommand. |
no_traffic |
Optional | If true, the newly deployed revision will not receive traffic. This option
only applies to services. Default: false |
wait |
Optional | If true, the action will execute and wait for the job to complete before
exiting. This option only applies to jobs. Default: false |
revision_traffic |
Optional | Comma-separated list of revision traffic assignments. ```yaml with: revision_traffic: 'my-revision=10' # percentage ``` To update traffic to the latest revision, use the special tag "LATEST": ```yaml with: revision_traffic: 'LATEST=100' ``` This is mutually-exclusive with `tag_traffic`. This option only applies to services. |
tag_traffic |
Optional | Comma-separated list of tag traffic assignments. ```yaml with: tag_traffic: 'my-tag=10' # percentage ``` This is mutually-exclusive with `revision_traffic`. This option only applies to services. |
update_traffic_flags |
Optional | Space-separated list of additional Cloud Run flags to pass to the `gcloud run services update-traffic` command. This can be used to apply advanced features that are not exposed via this GitHub Action. This flag only applies when `revision_traffic` or `tag_traffic` is set. ```yaml with: traffic_flags: '--set-tags=...' ``` Flags that include other flags must quote the _entire_ outer flag value. For example, to pass `--args=-X=123`: ```yaml with: flags: '--set-tags=... "--args=-X=123"' ``` See the [complete list of flags](https://cloud.google.com/sdk/gcloud/reference/run/services/update#FLAGS) for more information. Please note, this GitHub Action does not parse or validate the flags. You are responsible for making sure the flags are available on the gcloud version and subcommand. |
project_id |
Optional | ID of the Google Cloud project in which to deploy the service. |
region |
Optional | Region in which the Cloud Run services are deployed. Default: us-central1 |
gcloud_version |
Optional | Version of the Cloud SDK to install. If unspecified or set to "latest", the latest available gcloud SDK version for the target platform will be installed. Example: "290.0.1". |
gcloud_component |
Optional | Version of the Cloud SDK components to install and use. |
| Name | Description |
|---|---|
url |
The URL of the Cloud Run service. |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Deploy to Cloud Run'
author: 'Google LLC'
description: |-
Use this action to deploy a container or source code to Google Cloud Run.
inputs:
service:
description: |-
ID of the service or fully-qualified identifier of the service. This is
required unless providing `metadata` or `job`.
required: false
job:
description: |-
ID of the job or fully-qualified identifier of the job. This is required
unless providing `metadata` or `service`.
required: false
metadata:
description: |-
YAML service description for the Cloud Run service. This is required
unless providing `service` or `job`.
required: false
image:
description: |-
(Required, unless providing `metadata` or `source`) Fully-qualified name
of the container image to deploy. For example:
us-docker.pkg.dev/cloudrun/container/hello:latest
or
us-docker.pkg.dev/my-project/my-container/image:1.2.3
required: false
source:
description: |-
(Required, unless providing `metadata`, `image`, or `job`) Path to source
to deploy. If specified, this will deploy the Cloud Run service from the
code specified at the given source directory.
Learn more about the required permissions in [Deploying from source
code](https://cloud.google.com/run/docs/deploying-source-code).
required: false
suffix:
description: |-
String suffix to append to the revision name. Revision names always start
with the service name automatically. For example, specifying `v1` for a
service named `helloworld`, would lead to a revision named
`helloworld-v1`. This option only applies to services.
required: false
env_vars:
description: |-
List of environment variables that should be set in the environment.
These are comma-separated or newline-separated `KEY=VALUE`. Keys or values
that contain separators must be escaped with a backslash (e.g. `\,` or
`\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless
values are quoted.
```yaml
env_vars: |-
FRUIT=apple
SENTENCE=" this will retain leading and trailing spaces "
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
If both `env_vars` and `env_vars_file` are specified, the keys in
`env_vars` will take precedence over the keys in `env_vars_file`.
required: false
env_vars_update_strategy:
description: |-
Controls how the environment variables are set on the Cloud Run service.
If set to "merge", then the environment variables are _merged_ with any
upstream values. If set to "overwrite", then all environment variables on
the Cloud Run service will be replaced with exactly the values given by
the GitHub Action (making it authoritative).
default: 'merge'
required: true
secrets:
description: |-
List of KEY=VALUE pairs to use as secrets. These are comma-separated or
newline-separated `KEY=VALUE`. Keys or values that contain separators must
be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any
leading or trailing whitespace is trimmed unless values are quoted.
These can either be injected as environment variables or mounted as
volumes. Keys starting with a forward slash '/' are mount paths. All other
keys correspond to environment variables:
```yaml
with:
secrets: |-
# As an environment variable:
KEY1=secret-key-1:latest
# As a volume mount:
/secrets/api/key=secret-key-2:latest
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
required: false
secrets_update_strategy:
description: |-
Controls how the secrets are set on the Cloud Run service. If set to
`merge`, then the secrets are merged with any upstream values. If set to
`overwrite`, then all secrets on the Cloud Run service will be replaced
with exactly the values given by the GitHub Action (making it
authoritative).
default: 'merge'
required: true
labels:
description: |-
List of labels that should be set on the function. These are
comma-separated or newline-separated `KEY=VALUE`. Keys or values that
contain separators must be escaped with a backslash (e.g. `\,` or `\\n`)
unless quoted. Any leading or trailing whitespace is trimmed unless values
are quoted.
```yaml
labels: |-
labela=my-label
labelb=my-other-label
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
Google Cloud restricts the allowed values and length for labels. Please
see the Google Cloud documentation for labels for more information.
required: false
skip_default_labels:
description: |-
Skip applying the special annotation labels that indicate the deployment
came from GitHub Actions. The GitHub Action will automatically apply the
following labels which Cloud Run uses to enhance the user experience:
managed-by: github-actions
commit-sha: <sha>
Setting this to `true` will skip adding these special labels.
required: false
default: 'false'
tag:
description: |-
Traffic tag to assign to the newly-created revision. This option only
applies to services.
required: false
timeout:
description: |-
Maximum request execution time, specified as a duration like "10m5s" for
ten minutes and 5 seconds.
required: false
flags:
description: |-
Space separate list of additional Cloud Run flags to pass to the deploy
command. This can be used to apply advanced features that are not exposed
via this GitHub Action. For Cloud Run services, this command will be
`gcloud run deploy`. For Cloud Run jobs, this command will be `gcloud jobs
deploy`.
```yaml
with:
flags: '--add-cloudsql-instances=...'
```
Flags that include other flags must quote the _entire_ outer flag value. For
example, to pass `--args=-X=123`:
```yaml
with:
flags: '--add-cloudsql-instances=... "--args=-X=123"'
```
See the [complete list of
flags](https://cloud.google.com/sdk/gcloud/reference/run/deploy#FLAGS) for
more information.
Please note, this GitHub Action does not parse or validate the flags. You
are responsible for making sure the flags are available on the gcloud
version and subcommand.
required: false
no_traffic:
description: |-
If true, the newly deployed revision will not receive traffic. This option
only applies to services.
default: 'false'
required: false
wait:
description: |-
If true, the action will execute and wait for the job to complete before
exiting. This option only applies to jobs.
default: 'false'
required: false
revision_traffic:
description: |-
Comma-separated list of revision traffic assignments.
```yaml
with:
revision_traffic: 'my-revision=10' # percentage
```
To update traffic to the latest revision, use the special tag "LATEST":
```yaml
with:
revision_traffic: 'LATEST=100'
```
This is mutually-exclusive with `tag_traffic`. This option only applies
to services.
required: false
tag_traffic:
description: |-
Comma-separated list of tag traffic assignments.
```yaml
with:
tag_traffic: 'my-tag=10' # percentage
```
This is mutually-exclusive with `revision_traffic`. This option only
applies to services.
required: false
update_traffic_flags:
description: |-
Space separate list of additional Cloud Run flags to pass to the `gcloud
run services update-traffic` command. This can be used to apply advanced
features that are not exposed via this GitHub Action. This flag only
applies when `revision_traffic` or `tag_traffic` is set.
```yaml
with:
traffic_flags: '--set-tags=...'
```
Flags that include other flags must quote the _entire_ outer flag value. For
example, to pass `--args=-X=123`:
```yaml
with:
flags: '--set-tags=... "--args=-X=123"'
```
See the [complete list of
flags](https://cloud.google.com/sdk/gcloud/reference/run/services/update#FLAGS)
for more information.
Please note, this GitHub Action does not parse or validate the flags. You
are responsible for making sure the flags are available on the gcloud
version and subcommand.
required: false
project_id:
description: |-
ID of the Google Cloud project in which to deploy the service.
required: false
region:
description: |-
Region in which the Cloud Run services are deployed.
default: 'us-central1'
required: false
gcloud_version:
description: |-
Version of the Cloud SDK to install. If unspecified or set to "latest",
the latest available gcloud SDK version for the target platform will be
installed. Example: "290.0.1".
required: false
gcloud_component:
description: |-
Version of the Cloud SDK components to install and use.
required: false
outputs:
url:
description: |-
The URL of the Cloud Run service.
branding:
icon: 'chevrons-right'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
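A usage sketch for the action above, assuming authentication is handled by `google-github-actions/auth` earlier in the job; the version tags, service name, and secret name are assumptions:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2   # version tag is an assumption
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - name: Deploy to Cloud Run
        id: deploy
        uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: hello-service
          image: us-docker.pkg.dev/cloudrun/container/hello:latest
          region: us-central1
      - name: Show the service URL
        run: echo "${{ steps.deploy.outputs.url }}"
```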
Action ID: marketplace/aws-actions/aws-devicefarm-browser-testing
Author: AWS Device Farm
Publisher: aws-actions
Repository: github.com/aws-actions/aws-devicefarm-browser-testing
GitHub action for automated browser testing on AWS Device Farm
| Name | Required | Description |
|---|---|---|
mode |
Required | The mode to run the action in. There are three supported values: Specify `project` mode when you require creation or looking up of an AWS Device Farm Project ARN. Specify `gridurl` mode when you require a Test Grid URL. Specify `artifact` mode when you require retrieval of artifacts from an existing AWS Device Farm Project ARN. |
project-arn |
Required | The ARN of the AWS Device Farm Browser Testing Project. Alternatively supply a name and a project will be created on your behalf |
url-expires-seconds |
Optional | Lifetime, in seconds, of the Test Grid URL. Default: 900 |
artifact-types |
Optional | (Optional) A comma-delimited list of Device Farm artifacts that should be downloaded after the job completes. https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-device-farm/Variable/TestGridSessionArtifactCategory/ Note: To download all artifact types set the value to ALL. To download none, skip this input. They will be downloaded to a folder. The name of the folder can be found by referencing the output with name artifact-folder. Please use the GitHub Action [upload-artifact](https://github.com/actions/upload-artifact) to store them. |
artifact-folder |
Optional | (optional) The name of the folder that the test artifacts are downloaded into. Default: artifacts |
| Name | Description |
|---|---|
console-url |
The AWS Console URL for the Test Grid Project |
project-arn |
The ARN of the AWS Device Farm Browser Testing Project |
grid-url |
The AWS Device Farm Test Grid URL (only available in gridurl mode) |
grid-url-expires |
The Datetime that the supplied grid-url will expire formatted as YYYY-MM-DDThh:mm:ss.fffZ |
name: 'AWS Device Farm Browser Testing GitHub Action'
author: 'AWS Device Farm'
description: 'GitHub action for automated browser testing on AWS Device Farm'
branding:
icon: 'cloud'
color: 'orange'
inputs:
mode:
description: >-
The mode to run the action in. There are three supported values:
Specify `project` mode when you require creation or looking up of an AWS Device Farm Project ARN.
Specify `gridurl` mode when you require a Test Grid URL.
Specify `artifact` mode when you require retrieval of artifacts from an existing AWS Device Farm Project ARN.
required: true
project-arn:
description: >-
The ARN of the AWS Device Farm Browser Testing Project. Alternatively supply a name and a project will be created on your behalf
required: true
url-expires-seconds:
description: >-
Lifetime, in seconds, of the Test Grid URL.
required: false
default: 900
artifact-types:
description: >-
(Optional) A comma delimited list of Device Farm Artifacts that should be downloaded after the jobs completes.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-device-farm/Variable/TestGridSessionArtifactCategory/
Note: To download all artifact types set the value to ALL. To download none skip this input.
They will be downloaded to a folder. The name of the folder can be found by referencing the output with name artifact-folder. Please use the GitHub Action [upload-artifact](https://github.com/actions/upload-artifact) to store them.
required: false
default: ''
artifact-folder:
description: >-
(optional) The name of the folder that the test artifacts are downloaded into.
required: false
default: 'artifacts'
outputs:
console-url:
description: 'The AWS Console URL for the Test Grid Project'
project-arn:
description: 'The ARN of the AWS Device Farm Browser Testing Project'
grid-url:
description: 'The AWS Device Farm Test Grid URL (only available in gridurl mode)'
grid-url-expires:
description: 'The Datetime that the supplied grid-url will expire formatted as YYYY-MM-DDThh:mm:ss.fffZ'
runs:
using: 'node16'
main: 'dist/index.js'
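A usage sketch for the action above in `gridurl` mode. AWS credentials must already be available in the job; the version tag, secret name, and test command are assumptions:
```yaml
jobs:
  browser-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Request a Test Grid URL
        id: grid
        uses: aws-actions/aws-devicefarm-browser-testing@v1   # version tag is an assumption
        with:
          mode: gridurl
          project-arn: ${{ secrets.DEVICE_FARM_PROJECT_ARN }}
          url-expires-seconds: 900
      - name: Run browser tests against the remote grid
        run: npm test   # illustrative test command
        env:
          SELENIUM_REMOTE_URL: ${{ steps.grid.outputs.grid-url }}
```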
Action ID: marketplace/amirisback/doolan-website
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/doolan-website
EASY SIMPLE BOOTSTRAP WEBSITE, Full and Clear Documentation, Have Empty View :books:
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'DoolanWebsite'
description: 'EASY SIMPLE BOOTSTRAP WEBSITE, Full and Clear Documentation, Have Empty View :books:'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
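A minimal usage sketch for the Docker action above; the ref is an assumption:
```yaml
steps:
  - name: Run DoolanWebsite
    uses: amirisback/doolan-website@master   # ref is an assumption
    with:
      myInput: world
```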
Action ID: marketplace/azure/use-kubelogin
Author: Unknown
Publisher: azure
Repository: github.com/azure/use-kubelogin
Setup kubelogin in GitHub Actions
| Name | Required | Description |
|---|---|---|
kubelogin-version |
Optional | The version of kubelogin to use. Defaults to the latest version. |
skip-cache |
Optional | Skip cache check? When set to "true", the action will always download the latest version of kubelogin. Default: false |
github-api-base-url |
Optional | The base URL of GitHub API. Defaults to https://api.github.com. |
name: Setup kubelogin
description: Setup kubelogin in GitHub Actions
inputs:
kubelogin-version:
description: 'The version of kubelogin to use. Defaults to the latest version.'
required: false
skip-cache:
description: 'Skip cache check? When set to "true", the action will always download the latest version of kubelogin.'
required: false
default: 'false'
github-api-base-url:
description: 'The base URL of GitHub API. Defaults to https://api.github.com.'
required: false
branding:
color: green
runs:
using: 'node20'
main: 'lib/index.js'
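A usage sketch for the action above; the version tag and the pinned kubelogin version are assumptions:
```yaml
steps:
  - name: Install kubelogin
    uses: azure/use-kubelogin@v1   # version tag is an assumption
    with:
      kubelogin-version: 'v0.0.34'   # illustrative; omit to install the latest release
  - name: Verify the installation
    run: kubelogin --version
```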
Action ID: marketplace/azure/aml-registermodel
Author: azure/gh-aml
Publisher: azure
Repository: github.com/azure/aml-registermodel
Register a model in your Azure Machine Learning Workspace with this GitHub Action
| Name | Required | Description |
|---|---|---|
azure_credentials |
Required | Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS |
run_id |
Optional | ID of the run or pipeline run for which a model is to be registered |
experiment_name |
Optional | Experiment name to which the run belongs |
parameters_file |
Required | JSON file including the parameters for registering the model. Default: registermodel.json |
| Name | Description |
|---|---|
model_name |
Name of the registered model |
model_version |
Version of the registered model |
model_id |
ID of the registered model |
name: "Azure Machine Learning Register Model Action"
description: "Register a model in your Azure Machine Learning Workspace with this GitHub Action"
author: "azure/gh-aml"
inputs:
azure_credentials:
description: "Paste output of `az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth` as value of secret variable: AZURE_CREDENTIALS"
required: true
run_id:
description: "ID of the run or pipeline run for which a model is to be registered"
required: false
experiment_name:
description: "Experiment name to which the run belongs to"
required: false
parameters_file:
description: "JSON file including the parameters for registering the model."
required: true
default: "registermodel.json"
outputs:
model_name:
description: "Name of the registered model"
model_version:
description: "Version of the registered model"
model_id:
description: "ID of the registered model"
branding:
icon: "chevron-up"
color: "blue"
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/sobolevn/restrict-cursing-action
Author: sobolevn
Publisher: sobolevn
Repository: github.com/sobolevn/restrict-cursing-action
Github Action to prevent cursing and bad language in issues and pull requests
| Name | Required | Description |
|---|---|---|
text |
Optional | Text of what to replace offensive comments with Default: I am so sorry! :pray: |
name: 'restrict-cursing-action'
description: 'Github Action to prevent cursing and bad language in issues and pull requests'
author: 'sobolevn'
branding:
icon: 'alert-triangle'
color: 'red'
inputs:
text:
description: 'Text of what to replace offensive comments with'
required: false
default: 'I am so sorry! :pray:'
runs:
using: 'docker'
image: 'Dockerfile'
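A usage sketch for the action above. The triggers, the ref, and the `GITHUB_TOKEN` environment variable are assumptions (no token input is documented in the metadata, but the action needs repository access to edit comments):
```yaml
on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created, edited]

jobs:
  moderate:
    runs-on: ubuntu-latest
    steps:
      - uses: sobolevn/restrict-cursing-action@master   # ref is an assumption
        with:
          text: 'I am so sorry! :pray:'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumed to be read from the environment
```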
Action ID: marketplace/azure/arm-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/arm-deploy
Use this GitHub Action task to deploy Azure Resource Manager (ARM) template
| Name | Required | Description |
|---|---|---|
scope |
Required | Provide the scope of the deployment. Valid values are: 'resourcegroup', 'tenant', 'managementgroup', 'subscription' |
subscriptionId |
Optional | Override the Subscription Id set by Azure Login. |
managementGroupId |
Optional | Specify the Id for the Management Group, only required for Management Group Deployments. |
region |
Optional | Provide the target region, only required for tenant, management Group or Subscription deployments. |
resourceGroupName |
Optional | Provide the name of a resource group, only required for resource Group deployments. |
template |
Required | Specify the path or URL to the Azure Resource Manager template. |
deploymentMode |
Optional | Incremental (only add resources to resource group) or Complete (remove extra resources from resource group) or Validate (only validates the template). |
deploymentName |
Optional | Specifies the name of the resource group deployment to create. |
parameters |
Optional | Supply deployment parameter values. |
failOnStdErr |
Optional | Specify whether to fail the action if some data is written to the stderr stream of az cli. Valid values are: true, false. Default: True |
additionalArguments |
Optional | Specify any additional arguments for the deployment. |
maskedOutputs |
Optional | Specify a list of output keys whose values need to be registered with GitHub Actions as secrets. (See https://github.com/actions/toolkit/issues/184#issuecomment-1198653452 for the valid multiline string format.) |
name: "Deploy Azure Resource Manager (ARM) Template"
description: "Use this GitHub Action task to deploy Azure Resource Manager (ARM) template"
inputs:
scope:
description: "Provide the scope of the deployment. Valid values are: 'resourcegroup', 'tenant', 'managementgroup', 'subscription'"
required: true
subscriptionId:
description: "Override the Subscription Id set by Azure Login."
required: false
managementGroupId:
description: "Specify the Id for the Management Group, only required for Management Group Deployments."
required: false
region:
description: "Provide the target region, only required for tenant, management Group or Subscription deployments."
required: false
resourceGroupName:
description: "Provide the name of a resource group, only required for resource Group deployments."
required: false
template:
description: "Specify the path or URL to the Azure Resource Manager template."
required: true
deploymentMode:
description: "Incremental (only add resources to resource group) or Complete (remove extra resources from resource group) or Validate (only validates the template)."
required: false
deploymentName:
description: "Specifies the name of the resource group deployment to create."
required: false
parameters:
description: "Supply deployment parameter values."
required: false
failOnStdErr:
description: "Specify whether to fail the action if some data is written to stderr stream of az cli. Valid values are: true, false"
required: false
default: true
additionalArguments:
description: "Specify any additional arguments for the deployment."
required: false
maskedOutputs:
description: "Specify list of output keys that its value need to be register to github action as secret. (checkout https://github.com/actions/toolkit/issues/184#issuecomment-1198653452 for valid multiline string)"
required: false
branding:
color: orange
icon: package
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/wangyoucao577/go-release-action
Author: Jay Zhang <wangyoucao577@gmail.com>
Publisher: wangyoucao577
Repository: github.com/wangyoucao577/go-release-action
Build and Release Go Binaries to GitHub Release Assets
| Name | Required | Description |
|---|---|---|
github_token |
Required | GITHUB_TOKEN for uploading release assets to GitHub. |
goos |
Required | GOOS is the running program's operating system target: one of darwin, freebsd, linux, and so on. |
goarch |
Required | GOARCH is the running program's architecture target: one of 386, amd64, arm, s390x, loong64 and so on. |
goamd64 |
Optional | GOAMD64 is the running program's amd64 microarchitecture level: one of v1, v2, v3, v4. |
goarm |
Optional | GOARM is the running program's ARM microarchitecture level: one of ARMv5, ARMv6, ARMv7. |
gomips |
Optional | GOMIPS is the running program's MIPS microarchitecture level: hardfloat or softfloat. |
goversion |
Optional | The `Go` compiler version. |
build_flags |
Optional | Additional arguments to pass to the go build command. |
ldflags |
Optional | Values to provide to the -ldflags argument |
project_path |
Optional | Where to run `go build .` Default: . |
binary_name |
Optional | Specify another binary name if you do not want to use the repository basename. |
pre_command |
Optional | Extra command that will be executed before `go build`, e.g. for resolving dependencies. |
build_command |
Optional | The actual command used to build the binary, typically `go build`. Default: go build |
executable_compression |
Optional | Compress the executable binary with a third-party tool. Only `upx` is supported at the moment. |
extra_files |
Optional | Extra files that will also be packaged into the artifacts. |
md5sum |
Optional | Publish `.md5` along with artifacts. Default: TRUE |
sha256sum |
Optional | Publish `.sha256` along with artifacts. Default: FALSE |
release_tag |
Optional | Upload binaries to the release page indicated by the specified Git tag. |
release_name |
Optional | Upload binaries to the release page indicated by the specified release name. |
release_repo |
Optional | Repository to upload the binaries to. |
overwrite |
Optional | Overwrite the asset if it already exists. Default: FALSE |
asset_name |
Optional | Customize the asset name if you do not want to use the default format. |
retry |
Optional | How many times to retry if the upload fails. Default: 3 |
post_command |
Optional | Extra command that will be executed for teardown work |
compress_assets |
Optional | Compress assets before uploading. Default: TRUE |
upload |
Optional | Whether to upload release assets. Default: TRUE |
multi_binaries |
Optional | Build and package multiple binaries together. Default: FALSE |
| Name | Description |
|---|---|
release_asset_dir |
Release file directory provided for use by other workflows |
# action.yml
name: 'Go Release Binaries'
author: 'Jay Zhang <wangyoucao577@gmail.com>'
description: 'Build and Release Go Binaries to GitHub Release Assets'
inputs:
github_token:
description: 'GITHUB_TOKEN for uploading release assets to GitHub.'
required: true
default: ''
goos:
description: 'GOOS is the running programs operating system target: one of darwin, freebsd, linux, and so on.'
required: true
default: ''
goarch:
description: 'GOARCH is the running programs architecture target: one of 386, amd64, arm, s390x, loong64 and so on.'
required: true
default: ''
goamd64:
description: 'GOAMD64 is the running programs amd64 microarchitecture level: one of v1, v2, v3, v4.'
required: false
default: ''
goarm:
description: 'GOARM is the running programs arm microarchitecture level: ARMv5,ARMv6,ARMv7'
required: false
default: ''
gomips:
description: 'GOMIPS is the running programs mips microarchitecture level: hardfloat,softfloat'
required: false
default: ''
goversion:
description: 'The `Go` compiler version.'
required: false
default: ''
build_flags:
description: 'Additional arguments to pass the go build command.'
required: false
default: ''
ldflags:
description: 'Values to provide to the -ldflags argument'
required: false
default: ''
project_path:
description: 'Where to run `go build .`'
required: false
default: '.'
binary_name:
description: 'Specify another binary name if do not want to use repository basename'
required: false
default: ''
pre_command:
description: 'Extra command that will be executed before `go build`, may for solving dependency'
required: false
default: ''
build_command:
description: 'The actual command to build binary, typically `go build`.'
required: false
default: 'go build'
executable_compression:
description: 'Compression executable binary by some third-party tools. Only `upx` is supported at the moment.'
required: false
default: ''
extra_files:
description: 'Extra files that will be packaged into artifacts either.'
required: false
default: ''
md5sum:
description: 'Publish `.md5` along with artifacts.'
required: false
default: 'TRUE'
sha256sum:
description: 'Publish `.sha256` along with artifacts.'
required: false
default: 'FALSE'
release_tag:
description: 'Upload binaries to specified release page that indicated by Git tag.'
required: false
default: ''
release_name:
description: 'Upload binaries to specified release page that indicated by release name.'
required: false
default: ''
release_repo:
description: 'Repository to upload the binaries'
required: false
default: ''
overwrite:
description: "Overwrite asset if it's already exist."
required: false
default: 'FALSE'
asset_name:
description: 'Customize asset name if do not want to use the default format.'
required: false
default: ''
retry:
description: 'How many times retrying if upload fails.'
required: false
default: '3'
post_command:
description: 'Extra command that will be executed for teardown work'
required: false
default: ''
compress_assets:
description: 'Compress assets before uploading'
required: false
default: 'TRUE'
upload:
description: 'Upload release assets or not'
required: false
default: 'TRUE'
multi_binaries:
description: 'Build and package multiple binaries together'
required: false
default: 'FALSE'
outputs:
release_asset_dir:
description: 'Release file directory provided for use by other workflows'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.github_token }}
- ${{ inputs.goos }}
- ${{ inputs.goarch }}
- ${{ inputs.goamd64 }}
- ${{ inputs.goarm }}
- ${{ inputs.gomips }}
- ${{ inputs.goversion }}
- ${{ inputs.build_flags}}
- ${{ inputs.ldflags }}
- ${{ inputs.project_path }}
- ${{ inputs.binary_name }}
- ${{ inputs.pre_command }}
- ${{ inputs.build_command }}
- ${{ inputs.executable_compression }}
- ${{ inputs.extra_files }}
- ${{ inputs.md5sum }}
- ${{ inputs.sha256sum }}
- ${{ inputs.release_tag }}
- ${{ inputs.release_name }}
- ${{ inputs.release_repo }}
- ${{ inputs.overwrite }}
- ${{ inputs.asset_name }}
- ${{ inputs.retry }}
- ${{ inputs.post_command }}
- ${{ inputs.compress_assets }}
- ${{ inputs.upload }}
- ${{ inputs.multi_binaries }}
branding:
icon: 'package'
color: 'blue'
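Usage sketch for the Go build-and-release action above. The action's marketplace slug is not shown in this excerpt, so the `uses:` reference below is a placeholder; the binary name and flags are illustrative.
on:
  release:
    types: [created]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: <owner>/<go-release-action>@v1   # placeholder: the real slug is not shown in this excerpt
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          goos: linux
          goarch: amd64
          project_path: '.'
          binary_name: myapp            # hypothetical binary name
          ldflags: '-s -w'
          sha256sum: 'TRUE'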
Action ID: marketplace/axel-op/package-java-azure-function
Author: Unknown
Publisher: axel-op
Repository: github.com/axel-op/package-java-azure-function
Package an Azure Function app in Java
| Name | Required | Description |
|---|---|---|
function-name |
Required | The name of the function app to package |
working-directory |
Optional | The root directory of the function Default: . |
| Name | Description |
|---|---|
deployment-directory |
The directory containing the files to deploy |
deployment-file |
The zip file to deploy |
name: "Package an Azure Function app in Java"
description: "Package an Azure Function app in Java"
branding:
icon: package
color: red
inputs:
function-name:
description: "The name of the function app to package"
required: true
working-directory:
description: "The root directory of the function"
required: false
default: "."
outputs:
deployment-directory:
description: "The directory containing the files to deploy"
value: ${{ steps.package.outputs.deployment-directory }}
deployment-file:
description: "The zip file to deploy"
value: ${{ steps.package.outputs.deployment-file }}
runs:
using: "composite"
steps:
- name: Package
id: package
working-directory: ${{ inputs.working-directory }}
run: "${GITHUB_ACTION_PATH}/package.sh"
shell: bash
env:
APP_NAME: ${{ inputs.function-name }}
DEPLOYMENT_DIR: deployment
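A minimal usage sketch wiring the packaging step's outputs into a later step; the Maven build, the `@v1` tag, and the function app name are assumptions, not taken from this entry.
steps:
  - uses: actions/checkout@v4
  - name: Build
    run: mvn -B package
  - name: Package function app
    id: package
    uses: axel-op/package-java-azure-function@v1   # version tag assumed
    with:
      function-name: my-function-app               # hypothetical app name
  - name: Inspect the produced artifacts
    run: |
      echo "Deployment dir:  ${{ steps.package.outputs.deployment-directory }}"
      echo "Deployment file: ${{ steps.package.outputs.deployment-file }}"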
Action ID: marketplace/yoshi389111/github-profile-3d-contrib
Author: yoshi389111
Publisher: yoshi389111
Repository: github.com/yoshi389111/github-profile-3d-contrib
Generate profile 3D Contributions
name: GitHub-Profile-3D-Contrib
description: Generate profile 3D Contributions
author: yoshi389111
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'box'
color: 'green'
Action ID: marketplace/appleboy/sync-secrets-action
Author: Bo-Yi Wu
Publisher: appleboy
Repository: github.com/appleboy/sync-secrets-action
Copies secrets from the action's environment to many other repos.
| Name | Required | Description |
|---|---|---|
gitea_token |
Required | Token to use to get repos and write secrets |
gitea_server |
Required | The URL of the Gitea server to use. |
repos |
Optional | Newline-delimited list of repositories to copy secrets to. |
orgs |
Optional | Newline-delimited list of organizations to copy secrets to. |
gitea_skip_verify |
Optional | When set to `true`, the action will skip SSL verification when making requests to the Gitea API. Defaults to `false`. |
secrets |
Required | Newline-delimited regex expressions to select values from `process.env`. Use the action env to pass secrets from the repository in which this action runs with the `env` attribute of the step. |
dry_run |
Optional | Run everything except for secret create and update functionality. |
debug |
Optional | Enable debug mode to output detailed logging information for troubleshooting.
Default: false |
environment |
Optional | If this value is set, the action will set the secrets to the repositories environment with the name of this value. Only works if `target` is set to `actions` (default). |
description |
Optional | Optional description for the sync operation to help identify the purpose of this secrets synchronization. |
# action.yml
name: "Gitea Sync Secrets"
author: Bo-Yi Wu
branding:
icon: "copy"
color: "red"
description: "Copies secrets from the action's environment to many other repos."
inputs:
gitea_token:
description: "Token to use to get repos and write secrets"
required: true
gitea_server:
description: |
The URL of the Gitea server to use.
required: true
repos:
description: |
Newline-delimited list of repositories to copy secrets to.
required: false
orgs:
description: |
Newline-delimited list of organizations to copy secrets to.
required: false
gitea_skip_verify:
description: |
When set to `true`, the action will skip SSL verification when making requests
to the Gitea API. Defaults to `false`.
required: false
secrets:
description: |
Newline-delimited regex expressions to select values from `process.env`.
Use the action env to pass secrets from the repository in which this action
runs with the `env` attribute of the step.
required: true
dry_run:
description: |
Run everything except for secret create and update functionality.
required: false
debug:
description: |
Enable debug mode to output detailed logging information for troubleshooting.
required: false
default: "false"
environment:
default: ""
description: |
If this value is set, the action will set the secrets to the repositories environment with the name of this value.
Only works if `target` is set to `actions` (default).
required: false
description:
description: |
Optional description for the sync operation to help identify the purpose of this secrets synchronization.
required: false
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/wei/cloudflare-script
Author: wei
Publisher: wei
Repository: github.com/wei/cloudflare-script
Run inline scripts using the Cloudflare TypeScript SDK
| Name | Required | Description |
|---|---|---|
script |
Required | TypeScript script to execute using Cloudflare SDK |
| Name | Description |
|---|---|
result |
The result of the script execution, if the script returns a value |
name: 'cloudflare-script'
description: 'Run inline scripts using the Cloudflare TypeScript SDK'
author: 'wei'
branding:
icon: 'cloud'
color: 'orange'
inputs:
script:
description: 'TypeScript script to execute using Cloudflare SDK'
required: true
outputs:
result:
description: 'The result of the script execution, if the script returns a value'
runs:
using: 'node24'
main: 'dist/index.js'
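A minimal sketch exercising only the documented `script` input and `result` output; how the Cloudflare client and its credentials are exposed to the script is not described in this entry, so the script below just returns a static value.
steps:
  - id: cf
    uses: wei/cloudflare-script@v1   # version tag assumed
    with:
      script: |
        return { ok: true };
  - run: echo 'Result: ${{ steps.cf.outputs.result }}'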
Action ID: marketplace/github/issue-metrics
Author: github
Publisher: github
Repository: github.com/github/issue-metrics
A GitHub Action to report out issue metrics
---
name: "issue-metrics"
author: "github"
description: "A GitHub Action to report out issue metrics"
runs:
using: "docker"
image: "docker://ghcr.io/github/issue_metrics:v3"
branding:
icon: "check-square"
color: "white"
Action ID: marketplace/actions/component-detection-dependency-submission-action
Author: Unknown
Publisher: actions
Repository: github.com/actions/component-detection-dependency-submission-action
Upload information about your dependencies to the GitHub dependency graph using dependency submission API.
| Name | Required | Description |
|---|---|---|
token |
Optional | GitHub Personal Access Token (PAT). Defaults to PAT provided by Actions runner. Default: ${{ github.token }} |
filePath |
Optional | The path to the directory containing the environment files to upload. Defaults to Actions working directory. Default: . |
directoryExclusionList |
Optional | Filters out specific directories following a minimatch pattern. |
detectorArgs |
Optional | Comma-separated list of properties that can affect detector execution, such as EnableIfDefaultOff, which allows a specific detector that is in beta to run. The format is DetectorId=EnableIfDefaultOff, for example Pip=EnableIfDefaultOff. |
dockerImagesToScan |
Optional | Comma separated list of docker image names or hashes to execute container scanning on, ex: ubuntu:16.04,56bab49eef2ef07505f6a1b0d5bd3a601dfc3c76ad4460f24c91d6fa298369ab |
detectorsFilter |
Optional | A comma separated list with the identifiers of the specific detectors to be used. This is meant to be used for testing purposes only. |
detectorsCategories |
Optional | A comma-separated list with the categories of components that are going to be scanned. The detectors that run are the ones that belong to these categories. The possible values are: Npm, NuGet, Maven, RubyGems, Cargo, Pip, GoMod, CocoaPods, Linux. |
correlator |
Optional | An optional identifier to distinguish between multiple dependency snapshots of the same type. |
detector-name |
Optional | The name of the detector. If provided, detector-version and detector-url must also be provided. |
detector-version |
Optional | The version of the detector. If provided, detector-name and detector-url must also be provided. |
detector-url |
Optional | The URL of the detector. If provided, detector-name and detector-version must also be provided. |
snapshot-sha |
Optional | The SHA of the commit to associate with the snapshot. If provided, snapshot-ref must also be provided. |
snapshot-ref |
Optional | The Git reference to associate with the snapshot. If provided, snapshot-sha must also be provided. |
name: 'Component Detection dependency submission action'
description: 'Upload information about your dependencies to the GitHub dependency graph using dependency submission API. '
inputs:
token:
description: "GitHub Personal Access Token (PAT). Defaults to PAT provided by Actions runner."
required: false
default: ${{ github.token }}
filePath:
description: 'The path to the directory containing the environment files to upload. Defaults to Actions working directory.'
required: false
default: '.'
directoryExclusionList:
description: 'Filters out specific directories following a minimatch pattern.'
required: false
detectorArgs:
description: 'Comma-separated list of properties that can affect detector execution, such as EnableIfDefaultOff, which allows a specific detector that is in beta to run. The format is DetectorId=EnableIfDefaultOff, for example Pip=EnableIfDefaultOff.'
required: false
dockerImagesToScan:
description: 'Comma separated list of docker image names or hashes to execute container scanning on, ex: ubuntu:16.04,56bab49eef2ef07505f6a1b0d5bd3a601dfc3c76ad4460f24c91d6fa298369ab'
required: false
detectorsFilter:
description: 'A comma separated list with the identifiers of the specific detectors to be used. This is meant to be used for testing purposes only.'
required: false
detectorsCategories:
description: 'A comma-separated list with the categories of components that are going to be scanned. The detectors that run are the ones that belong to these categories. The possible values are: Npm, NuGet, Maven, RubyGems, Cargo, Pip, GoMod, CocoaPods, Linux.'
required: false
correlator:
description: 'An optional identifier to distinguish between multiple dependency snapshots of the same type.'
required: false
detector-name:
description: 'The name of the detector. If provided, detector-version and detector-url must also be provided.'
required: false
detector-version:
description: 'The version of the detector. If provided, detector-name and detector-url must also be provided.'
required: false
detector-url:
description: 'The URL of the detector. If provided, detector-name and detector-version must also be provided.'
required: false
snapshot-sha:
description: 'The SHA of the commit to associate with the snapshot. If provided, snapshot-ref must also be provided.'
required: false
snapshot-ref:
description: 'The Git reference to associate with the snapshot. If provided, snapshot-sha must also be provided.'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'upload-cloud'
color: 'blue'
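A minimal usage sketch submitting dependencies for the checked-out workspace with the default runner token; the workflow would also need write access for the dependency submission API (e.g. `contents: write`), and the version tag and detector categories below are assumptions.
steps:
  - uses: actions/checkout@v4
  - uses: actions/component-detection-dependency-submission-action@v1   # version tag assumed
    with:
      filePath: '.'
      detectorsCategories: 'Npm,Pip'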
Action ID: marketplace/mxschmitt/free-disk-space
Author: Unknown
Publisher: mxschmitt
Repository: github.com/mxschmitt/free-disk-space
A configurable GitHub Action to free up disk space on an Ubuntu GitHub Actions runner.
| Name | Required | Description |
|---|---|---|
android |
Optional | Remove Android runtime Default: true |
dotnet |
Optional | Remove .NET runtime Default: true |
haskell |
Optional | Remove Haskell runtime Default: true |
large-packages |
Optional | Remove large packages Default: true |
docker-images |
Optional | Remove Docker images Default: true |
tool-cache |
Optional | Remove image tool cache Default: false |
swap-storage |
Optional | Remove swap storage Default: true |
name: "Free Disk Space (Ubuntu)"
description: "A configurable GitHub Action to free up disk space on an Ubuntu GitHub Actions runner."
# See: https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#branding
branding:
icon: "trash-2"
color: "green"
inputs:
android:
description: "Remove Android runtime"
required: false
default: "true"
dotnet:
description: "Remove .NET runtime"
required: false
default: "true"
haskell:
description: "Remove Haskell runtime"
required: false
default: "true"
# option inspired by:
# https://github.com/apache/flink/blob/master/tools/azure-pipelines/free_disk_space.sh
large-packages:
description: "Remove large packages"
required: false
default: "true"
docker-images:
description: "Remove Docker images"
required: false
default: "true"
# option inspired by:
# https://github.com/actions/virtual-environments/issues/2875#issuecomment-1163392159
tool-cache:
description: "Remove image tool cache"
required: false
default: "false"
swap-storage:
description: "Remove swap storage"
required: false
default: "true"
runs:
using: "composite"
steps:
- shell: bash
run: |
# ======
# MACROS
# ======
# macro to print a line of equals
# (silly but works)
printSeparationLine() {
str=${1:=}
num=${2:-80}
counter=1
output=""
while [ $counter -le $num ]
do
output="${output}${str}"
counter=$((counter+1))
done
echo "${output}"
}
# macro to compute available space
# REF: https://unix.stackexchange.com/a/42049/60849
# REF: https://stackoverflow.com/a/450821/408734
getAvailableSpace() { echo $(df -a $1 | awk 'NR > 1 {avail+=$4} END {print avail}'); }
# macro to make Kb human readable (assume the input is Kb)
# REF: https://unix.stackexchange.com/a/44087/60849
formatByteCount() { echo $(numfmt --to=iec-i --suffix=B --padding=7 $1'000'); }
# macro to output saved space
printSavedSpace() {
saved=${1}
title=${2:-}
echo ""
printSeparationLine '*' 80
if [ ! -z "${title}" ]; then
echo "=> ${title}: Saved $(formatByteCount $saved)"
else
echo "=> Saved $(formatByteCount $saved)"
fi
printSeparationLine '*' 80
echo ""
}
# macro to print output of df with caption
printDH() {
caption=${1:-}
printSeparationLine '=' 80
echo "${caption}"
echo ""
echo "$ df -h /"
echo ""
df -h /
echo "$ df -a /"
echo ""
df -a /
echo "$ df -a"
echo ""
df -a
printSeparationLine '=' 80
}
# ======
# SCRIPT
# ======
# Display initial disk space stats
AVAILABLE_INITIAL=$(getAvailableSpace)
AVAILABLE_ROOT_INITIAL=$(getAvailableSpace '/')
printDH "BEFORE CLEAN-UP:"
echo ""
# Option: Remove Android library
if [[ ${{ inputs.android }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
sudo rm -rf /usr/local/lib/android || true
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED "Android library"
fi
# Option: Remove .NET runtime
if [[ ${{ inputs.dotnet }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
# https://github.community/t/bigger-github-hosted-runners-disk-space/17267/11
sudo rm -rf /usr/share/dotnet || true
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED ".NET runtime"
fi
# Option: Remove Haskell runtime
if [[ ${{ inputs.haskell }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
sudo rm -rf /opt/ghc || true
sudo rm -rf /usr/local/.ghcup || true
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED "Haskell runtime"
fi
# Option: Remove large packages
# REF: https://github.com/apache/flink/blob/master/tools/azure-pipelines/free_disk_space.sh
if [[ ${{ inputs.large-packages }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
sudo apt-get remove -y '^aspnetcore-.*' || echo "::warning::The command [sudo apt-get remove -y '^aspnetcore-.*'] failed to complete successfully. Proceeding..."
sudo apt-get remove -y '^dotnet-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^dotnet-.*' --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y '^llvm-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^llvm-.*' --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y 'php.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y 'php.*' --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y '^mongodb-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mongodb-.*' --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y '^mysql-.*' --fix-missing || echo "::warning::The command [sudo apt-get remove -y '^mysql-.*' --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing || echo "::warning::The command [sudo apt-get remove -y azure-cli google-chrome-stable firefox powershell mono-devel libgl1-mesa-dri --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y google-cloud-sdk --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-sdk --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get remove -y google-cloud-cli --fix-missing || echo "::debug::The command [sudo apt-get remove -y google-cloud-cli --fix-missing] failed to complete successfully. Proceeding..."
sudo apt-get autoremove -y || echo "::warning::The command [sudo apt-get autoremove -y] failed to complete successfully. Proceeding..."
sudo apt-get clean || echo "::warning::The command [sudo apt-get clean] failed to complete successfully. Proceeding..."
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED "Large misc. packages"
fi
# Option: Remove Docker images
if [[ ${{ inputs.docker-images }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
sudo docker image prune --all --force || true
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED "Docker images"
fi
# Option: Remove tool cache
# REF: https://github.com/actions/virtual-environments/issues/2875#issuecomment-1163392159
if [[ ${{ inputs.tool-cache }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
sudo rm -rf "$AGENT_TOOLSDIRECTORY" || true
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED "Tool cache"
fi
# Option: Remove Swap storage
if [[ ${{ inputs.swap-storage }} == 'true' ]]; then
BEFORE=$(getAvailableSpace)
sudo swapoff -a || true
sudo rm -f /mnt/swapfile || true
free -h
AFTER=$(getAvailableSpace)
SAVED=$((AFTER-BEFORE))
printSavedSpace $SAVED "Swap storage"
fi
# Output saved space statistic
AVAILABLE_END=$(getAvailableSpace)
AVAILABLE_ROOT_END=$(getAvailableSpace '/')
echo ""
printDH "AFTER CLEAN-UP:"
echo ""
echo ""
echo "/dev/root:"
printSavedSpace $((AVAILABLE_ROOT_END - AVAILABLE_ROOT_INITIAL))
echo "overall:"
printSavedSpace $((AVAILABLE_END - AVAILABLE_INITIAL))
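A typical usage sketch, placed as the first step of a job that needs extra disk space; every key shown maps to a documented input and the `@main` ref is an assumption.
steps:
  - name: Free disk space
    uses: mxschmitt/free-disk-space@main   # ref assumed
    with:
      tool-cache: true
      android: true
      dotnet: true
      haskell: true
      large-packages: false
      docker-images: true
      swap-storage: true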
Action ID: marketplace/mheap/setup-go-cli
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/setup-go-cli
Install any goreleaser based CLI in GitHub Actions
| Name | Required | Description |
|---|---|---|
owner |
Required | The user/org name to fetch the tool from |
repo |
Required | The repository name to fetch the tool from |
cli_name |
Required | The name of the CLI (may be different to repo) |
version |
Optional | The version to install |
package_name_template |
Optional | Allows you to customise the package name using a template |
package_type |
Optional | The package archive type (.tar.gz, .zip) |
token |
Optional | The GitHub token to use when fetching the latest version of the tool Default: ${{ github.token }} |
name: Setup Go CLI
description: Install any goreleaser based CLI in GitHub Actions
runs:
using: node20
main: dist/index.js
inputs:
owner:
description: The user/org name to fetch the tool from
required: true
repo:
description: The repository name to fetch the tool from
required: true
cli_name:
description: The name of the CLI (may be different to repo)
required: true
version:
description: The version to install
required: false
package_name_template:
description: Allows you to customise the package name using a template
required: false
package_type:
description: The package archive type (.tar.gz, .zip)
required: false
token:
description: The GitHub token to use when fetching the latest version of the tool
default: ${{ github.token }}
required: false
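A minimal sketch installing a goreleaser-published CLI from a hypothetical repository; the owner, repo, and CLI name are placeholders.
steps:
  - uses: mheap/setup-go-cli@v1   # version tag assumed
    with:
      owner: example-org       # hypothetical owner
      repo: example-cli        # hypothetical repository
      cli_name: example-cli    # hypothetical CLI name
  - run: example-cli --version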
Action ID: marketplace/azure/aks-create-action
Author: Unknown
Publisher: azure
Repository: github.com/azure/aks-create-action
Create an AKS Cluster
| Name | Required | Description |
|---|---|---|
ARM_CLIENT_ID |
Required | Client ID for Service Principal |
ARM_CLIENT_SECRET |
Required | Client Secret for Service Principal |
ARM_SUBSCRIPTION_ID |
Required | Subscription for Service Principal |
ARM_TENANT_ID |
Required | Tenant for Service Principal |
RESOURCE_GROUP_NAME |
Required | Resource Group for cluster |
CLUSTER_NAME |
Required | Name for cluster |
CLUSTER_SIZE |
Optional | Size of cluster (dev or test) Default: dev |
STORAGE_ACCOUNT_NAME |
Required | Name for Storage Account |
STORAGE_CONTAINER_NAME |
Required | Name for Storage Container |
STORAGE_ACCESS_KEY |
Required | Access Key for Storage Container |
ACTION_TYPE |
Optional | create or destroy Default: create |
IAC_TYPE |
Optional | IAC tool, Possible choices: [ pulumi, terraform ] Default: terraform |
CREATE_ACR |
Optional | create ACR with cluster name Default: false |
PULUMI_ACCESS_TOKEN |
Required | Token that allows this action to access Pulumi |
# action.yml
name: 'Create AKS Cluster'
description: 'Create an AKS Cluster'
branding:
icon: anchor
color: blue
inputs:
ARM_CLIENT_ID: # id of input
description: 'Client ID for Service Principal'
required: true
ARM_CLIENT_SECRET: # id of input
description: 'Client Secret for Service Principal'
required: true
ARM_SUBSCRIPTION_ID: # id of input
description: 'Subscription for Service Principal'
required: true
ARM_TENANT_ID: # id of input
description: 'Tenant for Service Principal'
required: true
RESOURCE_GROUP_NAME: # id of input
description: 'Resource Group for cluster'
required: true
CLUSTER_NAME:
description: 'Name for cluster'
required: true
CLUSTER_SIZE:
description: 'Size of cluster (dev or test)'
required: false
default: 'dev'
STORAGE_ACCOUNT_NAME: # id of input
description: 'Name for Storage Account'
required: true
STORAGE_CONTAINER_NAME:
description: 'Name for Storage Container'
required: true
STORAGE_ACCESS_KEY:
description: 'Access Key for Storage Container'
required: true
ACTION_TYPE:
description: 'create or destroy'
required: false
default: 'create'
IAC_TYPE:
description: 'IAC tool, Possible choices: [ pulumi, terraform ]'
required: false
default: 'terraform'
CREATE_ACR:
description: 'create ACR with cluster name'
default: 'false'
PULUMI_ACCESS_TOKEN:
description: 'Token that allows this action to access Pulumi'
required: 'false'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.ARM_CLIENT_ID }}
- ${{ inputs.ARM_CLIENT_SECRET }}
- ${{ inputs.ARM_SUBSCRIPTION_ID }}
- ${{ inputs.ARM_TENANT_ID }}
- ${{ inputs.RESOURCE_GROUP_NAME }}
- ${{ inputs.CLUSTER_NAME }}
- ${{ inputs.CLUSTER_SIZE }}
- ${{ inputs.STORAGE_ACCOUNT_NAME }}
- ${{ inputs.STORAGE_CONTAINER_NAME }}
- ${{ inputs.STORAGE_ACCESS_KEY }}
- ${{ inputs.ACTION_TYPE }}
- ${{ inputs.CREATE_ACR }}
- ${{ inputs.IAC_TYPE }}
- ${{ inputs.PULUMI_ACCESS_TOKEN }}
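A sketch creating a dev-sized cluster with Terraform state in an existing storage account; all resource names are placeholders and the secrets are assumed to exist in the repository.
steps:
  - uses: azure/aks-create-action@v1   # version tag assumed
    with:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
      RESOURCE_GROUP_NAME: my-rg
      CLUSTER_NAME: my-aks
      CLUSTER_SIZE: dev
      STORAGE_ACCOUNT_NAME: mytfstate
      STORAGE_CONTAINER_NAME: tfstate
      STORAGE_ACCESS_KEY: ${{ secrets.STORAGE_ACCESS_KEY }}
      IAC_TYPE: terraform
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}   # listed as required even when IAC_TYPE is terraform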
Action ID: marketplace/azure/online-experimentation-deploy-metrics
Author: Microsoft
Publisher: azure
Repository: github.com/azure/online-experimentation-deploy-metrics
Deploy metric definitions from a configuration file to the Online Experimentation workspace
| Name | Required | Description |
|---|---|---|
path |
Required | Source file(s) containing metrics. Multiple files can be specified in list or glob format. Example: **/*.json or metrics/*.json. Use ! to exclude files. |
online-experimentation-workspace-endpoint |
Required | Online Experimentation workspace endpoint |
operation |
Optional | Possible values: validate or deploy - deploy by default. validate: only validates the configuration file. deploy: syncs the configuration file with Online Experimentation workspace Default: deploy |
strict |
Optional | If strict, the deploy operation deletes metrics not found in the config file. Choices: true, false (default: true). Default: true |
add-commit-hash-to-metric-description |
Optional | Indicates whether to add the git commit hash to the metric description. This is helpful to track the metric version used by experiment analysis. Choices: true, false (default: false) Default: false |
name: 'Online Experimentation Deploy Metrics'
description:
'Deploy metric definitions from a configuration file to the Online
Experimentation workspace'
author: 'Microsoft'
branding:
icon: 'heart'
color: 'red'
# Define your inputs here.
inputs:
path:
description:
'Source file(s) containing metrics. Multiple files can be specified in
list or glob format. Example: **/*.json or metrics/*.json. Use ! to
exclude files.'
required: true
online-experimentation-workspace-endpoint: # Online Experimentation workspace endpoint
description: 'Online Experimentation workspace endpoint'
required: true
operation: # Validate the configuration file only
description:
'Possible values: validate or deploy - deploy by default. validate: only
validates the configuration file. deploy: syncs the configuration file
with Online Experimentation workspace'
required: false
default: 'deploy'
strict:
description:
'If strict, the deploy operation deletes metrics not found in the config
file. Choices: true, false (default: true).'
required: false
default: 'true'
add-commit-hash-to-metric-description: # Add commit hash to metric description
description:
'Indicates whether to add the git commit hash to the metric description.
This is helpful to track the metric version used by experiment analysis.
Choices: true, false (default: false)'
required: false
default: 'false'
runs:
using: node20
main: dist/index.js
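A sketch validating metric definitions on pull requests; an authenticated Azure session (e.g. a prior azure/login step) is assumed and the endpoint variable is a placeholder.
steps:
  - uses: actions/checkout@v4
  - uses: azure/online-experimentation-deploy-metrics@v1   # version tag assumed
    with:
      path: metrics/*.json
      online-experimentation-workspace-endpoint: ${{ vars.OEW_ENDPOINT }}   # hypothetical repository variable
      operation: validate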
Action ID: marketplace/tgymnich/setup-swift
Author: TG908
Publisher: tgymnich
Repository: github.com/tgymnich/setup-swift
Setup a Swift environment
| Name | Required | Description |
|---|---|---|
version |
Required | Swift version to use. Default: 5.1.3 |
name: 'Setup Swift Action'
description: 'Setup a Swift environment'
author: 'TG908'
branding:
icon: 'chevrons-right'
color: 'orange'
inputs:
version:
description: 'Swift version to use.'
required: true
default: '5.1.3'
runs:
using: 'node12'
main: 'dist/index.js'
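A minimal sketch installing the documented default Swift version and verifying it; the version tag is assumed.
steps:
  - uses: tgymnich/setup-swift@v1   # major version assumed
    with:
      version: '5.1.3'
  - run: swift --version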
Action ID: marketplace/azure/aca-review-apps
Author: Unknown
Publisher: azure
Repository: github.com/azure/aca-review-apps
GitHub Action to create a new Container App revision for PR review. github.com/Azure/Actions
| Name | Required | Description |
|---|---|---|
resource-group |
Required | Name of the Resource Group in which the Container App will be created |
name |
Required | Name of the Container App |
image |
Required | Name of Container Image. If the `mode` input is `deactivate-revision`, this input is ignored. |
revision-name-suffix |
Required | Suffix for the new revision, or for the target revision to deactivate. If the `mode` input is `deactivate-revision`, this input needs to be set. |
deactivate-revision-mode |
Optional | Enable Deactivation-Revision Mode of this action. `true` or `false`. Default value is `false` |
| Name | Description |
|---|---|
app-url |
Deployed App URL |
name: 'Create Azure Container App Review Revisions'
description: 'GitHub Action to create a new Container App revision for PR review. github.com/Azure/Actions'
inputs:
resource-group:
description: 'Name of the Resource Group in which the Container App will be created'
required: true
name:
description: 'Name of the Container App'
required: true
image:
description: Name of Container Image. If the `mode` input is `deactivate-revision`, this input is ignored.
required: true
revision-name-suffix:
description: Suffix for the new revision, or for the target revision to deactivate. If the `mode` input is `deactivate-revision`, this input needs to be set.
required: true
deactivate-revision-mode:
description: Enable Deactivation-Revision Mode of this action. `true` or `false`. Default value is `false`
required: false
outputs:
app-url:
description: 'Deployed App URL'
branding:
icon: 'check-circle'
color: 'blue'
runs:
using: 'node16'
main: 'lib/main.js'
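A sketch creating a review revision for a pull request; an authenticated Azure session (e.g. azure/login) is assumed beforehand, and the resource names and version tag are placeholders.
steps:
  - id: review
    uses: azure/aca-review-apps@v1   # version tag assumed
    with:
      resource-group: my-rg
      name: my-containerapp
      image: myregistry.azurecr.io/app:${{ github.sha }}
      revision-name-suffix: pr-${{ github.event.number }}
  - run: echo "Review app at ${{ steps.review.outputs.app-url }}"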
Action ID: marketplace/actions-hub/kubectl
Author: Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>
Publisher: actions-hub
Repository: github.com/actions-hub/kubectl
GitHub Action for interacting with kubectl (k8s)
| Name | Required | Description |
|---|---|---|
args |
Optional | Arguments for the CLI command |
redirect-to |
Optional | Variable name to redirect CLI output |
name: 'Kubernetes (k8s) cli - kubectl'
description: 'GitHub Action for interacting with kubectl (k8s)'
author: 'Serhiy Mytrovtsiy <mitrovtsiy@ukr.net>'
branding:
icon: 'terminal'
color: 'blue'
inputs:
args:
description: 'Arguments for the CLI command'
required: false
redirect-to:
description: 'Variable name to redirect CLI output'
required: false
runs:
using: 'docker'
image: 'Dockerfile'
env:
dest: ${{ inputs.redirect-to }}
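A minimal sketch running a kubectl command; cluster credentials are typically supplied through the environment, and the `KUBE_CONFIG` secret name below is an assumption rather than something documented in this entry.
steps:
  - uses: actions-hub/kubectl@master   # ref assumed
    env:
      KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}   # assumed credential wiring
    with:
      args: get pods --all-namespaces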
Action ID: marketplace/github/accessibility-scanner
Author: Unknown
Publisher: github
Repository: github.com/github/accessibility-scanner
Finds potential accessibility gaps, files GitHub issues to track them, and attempts to fix them with Copilot.
| Name | Required | Description |
|---|---|---|
urls |
Required | Newline-delimited list of URLs to check for accessibility issues |
repository |
Required | Repository (with owner) to file issues in |
token |
Required | Personal access token (PAT) with fine-grained permissions 'contents: write', 'issues: write', and 'pull_requests: write' |
cache_key |
Required | Key for caching results across runs |
login_url |
Optional | If scanned pages require authentication, the URL of the login page |
username |
Optional | If scanned pages require authentication, the username to use for login |
password |
Optional | If scanned pages require authentication, the password to use for login |
auth_context |
Optional | If scanned pages require authentication, a stringified JSON object containing 'username', 'password', 'cookies', and/or 'localStorage' from an authenticated session |
skip_copilot_assignment |
Optional | Whether to skip assigning filed issues to Copilot Default: false |
| Name | Description |
|---|---|
results |
List of issues and pull requests filed (and their associated finding(s)), as stringified JSON |
name: "accessibility-scanner"
description: "Finds potential accessibility gaps, files GitHub issues to track them, and attempts to fix them with Copilot."
inputs:
urls:
description: "Newline-delimited list of URLs to check for accessibility issues"
required: true
multiline: true
repository:
description: "Repository (with owner) to file issues in"
required: true
token:
description: "Personal access token (PAT) with fine-grained permissions 'contents: write', 'issues: write', and 'pull_requests: write'"
required: true
cache_key:
description: "Key for caching results across runs"
required: true
login_url:
description: "If scanned pages require authentication, the URL of the login page"
required: false
username:
description: "If scanned pages require authentication, the username to use for login"
required: false
password:
description: "If scanned pages require authentication, the password to use for login"
required: false
auth_context:
description: "If scanned pages require authentication, a stringified JSON object containing 'username', 'password', 'cookies', and/or 'localStorage' from an authenticated session"
required: false
skip_copilot_assignment:
description: "Whether to skip assigning filed issues to Copilot"
required: false
default: "false"
outputs:
results:
description: "List of issues and pull requests filed (and their associated finding(s)), as stringified JSON"
value: ${{ steps.results.outputs.results }}
runs:
using: "composite"
steps:
- name: Make sub-actions referenceable
working-directory: ${{ github.action_path }}
shell: bash
run: |
RUNNER_WORK_DIR="$(realpath "${GITHUB_WORKSPACE}/../..")"
ACTION_DIR="${RUNNER_WORK_DIR}/_actions/github/accessibility-scanner/current"
mkdir -p "${ACTION_DIR}/.github/actions"
if [ "$(realpath ".github/actions")" != "$(realpath "${ACTION_DIR}/.github/actions")" ]; then
cp -a ".github/actions/." "${ACTION_DIR}/.github/actions/"
fi
- name: Restore cached results
id: restore
uses: ./../../_actions/github/accessibility-scanner/current/.github/actions/gh-cache/cache
with:
key: ${{ inputs.cache_key }}
token: ${{ inputs.token }}
- if: ${{ steps.restore.outputs.value }}
name: Normalize cache format
id: normalize_cache
shell: bash
run: |
# Migrate cache format from v1 to v2:
# If cached data is a list of Finding objects, each with 'issueUrl' keys (i.e. v1),
# convert to a list of (partial) Result objects, each with 'findings' and 'issue' keys (i.e. v2).
# Otherwise, re-output as-is.
printf '%s' "value=$(printf '%s' '${{ steps.restore.outputs.value }}' | jq -c 'if (type == "array" and length > 0 and (.[0] | has("issueUrl"))) then map({findings: [del(.issueUrl)], issue: {url: .issueUrl}}) else . end' )" >> $GITHUB_OUTPUT
- if: ${{ inputs.login_url && inputs.username && inputs.password && !inputs.auth_context }}
name: Authenticate
id: auth
uses: ./../../_actions/github/accessibility-scanner/current/.github/actions/auth
with:
login_url: ${{ inputs.login_url }}
username: ${{ inputs.username }}
password: ${{ inputs.password }}
- name: Find
id: find
uses: ./../../_actions/github/accessibility-scanner/current/.github/actions/find
with:
urls: ${{ inputs.urls }}
auth_context: ${{ inputs.auth_context || steps.auth.outputs.auth_context }}
- name: File
id: file
uses: ./../../_actions/github/accessibility-scanner/current/.github/actions/file
with:
findings: ${{ steps.find.outputs.findings }}
repository: ${{ inputs.repository }}
token: ${{ inputs.token }}
cached_filings: ${{ steps.normalize_cache.outputs.value }}
- if: ${{ steps.file.outputs.filings }}
name: Get issues from filings
id: get_issues_from_filings
shell: bash
run: |
# Extract open issues from Filing objects and output as a single-line JSON array
issues=$(jq -c '[.[] | select(.issue.state == "open") | .issue]' <<< '${{ steps.file.outputs.filings }}')
echo "issues=$issues" >> "$GITHUB_OUTPUT"
- if: ${{ inputs.skip_copilot_assignment != 'true' }}
name: Fix
id: fix
uses: ./../../_actions/github/accessibility-scanner/current/.github/actions/fix
with:
issues: ${{ steps.get_issues_from_filings.outputs.issues }}
repository: ${{ inputs.repository }}
token: ${{ inputs.token }}
- name: Set results output
id: results
uses: actions/github-script@v8
with:
script: |
const filings = ${{ steps.file.outputs.filings || '""' }} || [];
const fixings = ${{ steps.fix.outputs.fixings || '""' }} || [];
const fixingsByIssueUrl = fixings.reduce((acc, fixing) => {
if (fixing.issue && fixing.issue.url) {
acc[fixing.issue.url] = fixing;
}
return acc;
}, {});
const results = filings;
for (const result of results) {
if (result.issue && result.issue.url && fixingsByIssueUrl[result.issue.url]) {
result.pullRequest = fixingsByIssueUrl[result.issue.url].pullRequest;
}
}
core.setOutput('results', JSON.stringify(results));
core.debug(`Results: ${JSON.stringify(results)}`);
- name: Save cached results
uses: ./../../_actions/github/accessibility-scanner/current/.github/actions/gh-cache/cache
with:
key: ${{ inputs.cache_key }}
value: ${{ steps.results.outputs.results }}
token: ${{ inputs.token }}
branding:
icon: "compass"
color: "blue"
Action ID: marketplace/samuelmeuli/action-maven-publish
Author: Samuel Meuli
Publisher: samuelmeuli
Repository: github.com/samuelmeuli/action-maven-publish
GitHub Action for automatically publishing Maven packages
| Name | Required | Description |
|---|---|---|
gpg_private_key |
Optional | GPG private key for signing the published artifacts |
gpg_passphrase |
Optional | Passphrase for the GPG key |
nexus_username |
Required | Username (not email!) for your Nexus repository manager account |
nexus_password |
Required | Password for your Nexus account |
server_id |
Optional | Nexus server ID as specified in your project's `nexus-staging-maven-plugin` and `distributionManagement` configurations Default: ossrh |
directory |
Optional | Directory of the Maven project which should be deployed |
maven_goals_phases |
Optional | Maven goals and build phases to execute Default: clean deploy |
maven_args |
Optional | Additional arguments to pass to the Maven command |
maven_profiles |
Optional | Active Maven profiles Default: deploy |
name: action-maven-publish
author: Samuel Meuli
description: GitHub Action for automatically publishing Maven packages
inputs:
gpg_private_key:
description: GPG private key for signing the published artifacts
required: false
gpg_passphrase:
description: Passphrase for the GPG key
required: false
nexus_username:
description: Username (not email!) for your Nexus repository manager account
required: true
nexus_password:
description: Password for your Nexus account
required: true
server_id:
description: Nexus server ID as specified in your project's `nexus-staging-maven-plugin` and `distributionManagement` configurations
required: false
default: ossrh
directory:
description: Directory of the Maven project which should be deployed
required: false
maven_goals_phases:
description: Maven goals and build phases to execute
required: false
default: clean deploy
maven_args:
description: Additional arguments to pass to the Maven command
required: false
default: ""
maven_profiles:
description: Active Maven profiles
required: false
default: "deploy"
runs:
using: node12
main: ./index.js
branding:
icon: upload-cloud
color: orange
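A sketch publishing a signed artifact through the default `ossrh` server; it assumes a prior JDK setup step and repository secrets holding the GPG key and Nexus credentials.
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-java@v4
    with:
      distribution: temurin
      java-version: '17'
  - uses: samuelmeuli/action-maven-publish@v1   # version tag assumed
    with:
      gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
      gpg_passphrase: ${{ secrets.GPG_PASSPHRASE }}
      nexus_username: ${{ secrets.OSSRH_USERNAME }}
      nexus_password: ${{ secrets.OSSRH_PASSWORD }}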
Action ID: marketplace/tibdex/autosquash
Author: Thibault Derousseaux <tibdex@gmail.com>
Publisher: tibdex
Repository: github.com/tibdex/autosquash
Automatically update PRs with outdated checks and squash and merge the ones matching all branch protections.
| Name | Required | Description |
|---|---|---|
github_token |
Required | Token for the GitHub API. |
label |
Required | Name of the label used by Autosquash to find the pull requests to handle. Default: autosquash |
name: Autosquash
author: Thibault Derousseaux <tibdex@gmail.com>
description: Automatically update PRs with outdated checks and squash and merge the ones matching all branch protections.
inputs:
github_token:
description: Token for the GitHub API.
required: true
label:
default: autosquash
description: Name of the label used by Autosquash to find the pull requests to handle.
required: true
runs:
using: node12
main: dist/index.js
branding:
icon: box
color: yellow
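A minimal sketch using only the documented inputs; the version tag is assumed, and the workflow triggers that make Autosquash effective are described in its own documentation and omitted here.
steps:
  - uses: tibdex/autosquash@v2   # version tag assumed
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      label: autosquash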
Action ID: marketplace/codecov/codecov-ats
Author: Unknown
Publisher: codecov
Repository: github.com/codecov/codecov-ats
GitHub Action that returns selected test labels from Codecov☂️ to CI
| Name | Required | Description |
|---|---|---|
token |
Optional | Repository upload token - get it from codecov.io. Required or set it with the environment variable CODECOV_TOKEN |
static_token |
Optional | Repository static token - get it from codecov.io. Required or set it with the environment variable CODECOV_STATIC_TOKEN |
enterprise_url |
Optional | Change the upload host (Enterprise use only) |
file_pattern |
Optional | File pattern to search for. Defaults to "*" |
folders_to_exclude |
Optional | Avoid certain folders when uploading static analysis |
force |
Optional | Force upload of files during static analysis regardless if they are new |
label_max_wait_time |
Optional | Max time (in seconds) to wait for the label analysis. Default is to wait forever. |
os |
Optional | Override the assumed OS. Options are linux | macos | windows. |
output_variable |
Optional | Variable to save down tests string. Defaults to CODECOV_ATS_TESTS |
override_base_commit |
Optional | Override the assumed base commit. |
override_branch |
Optional | Branch to which this commit belongs |
override_commit |
Optional | Commit SHA (with 40 chars) |
override_parent |
Optional | SHA (with 40 chars) of what should be the parent of this commit |
override_pr |
Optional | Specify the pull request number manually. |
override_slug |
Optional | owner/repo slug used instead of the private repo token in Self-hosted |
static_folders_to_exclude |
Optional | Folders not to search during static analysis |
static_folder_to_search |
Optional | Folder to search during static analysis |
static_force |
Optional | Force running of static analysis |
static_number_processes |
Optional | Number of processes to use during static analysis |
static_search_pattern |
Optional | File pattern to search for during static analysis |
verbose |
Optional | Specify whether the Codecov output should be verbose |
version |
Optional | Specify which version of the Codecov CLI should be used. Defaults to `latest` |
name: 'Codecov ATS'
description: 'GitHub Action that returns selected test labels from Codecov☂️ to CI'
inputs:
token:
description: 'Repository upload token - get it from codecov.io. Required or set it with the environment variable CODECOV_TOKEN'
required: false
static_token:
description: 'Repository static token - get it from codecov.io. Required or set it with the environment variable CODECOV_STATIC_TOKEN'
required: false
enterprise_url:
description: 'Change the upload host (Enterprise use only)'
required: false
file_pattern:
description: 'File pattern to search for. Defaults to "*"'
required: false
folders_to_exclude:
description: 'Avoid certain folders when uploading static analysis'
required: false
force:
description: 'Force upload of files during static analysis regardless if they are new'
required: false
label_max_wait_time:
description: 'Max time (in seconds) to wait for the label analysis. Default is to wait forever.'
required: false
os:
description: 'Override the assumed OS. Options are linux | macos | windows.'
required: false
output_variable:
description: 'Variable to save down tests string. Defaults to CODECOV_ATS_TESTS'
required: false
override_base_commit:
description: 'Override the assumed base commit.'
required: false
override_branch:
description: 'Branch to which this commit belongs'
required: false
override_commit:
description: 'Commit SHA (with 40 chars)'
required: false
override_parent:
description: 'SHA (with 40 chars) of what should be the parent of this commit'
required: false
override_pr:
description: 'Specify the pull request number manually.'
required: false
override_slug:
description: 'owner/repo slug used instead of the private repo token in Self-hosted'
required: false
static_folders_to_exclude:
description: 'Folders not to search during static analysis'
required: false
static_folder_to_search:
description: 'Folder to search during static analysis'
required: false
static_force:
description: 'Force running of static analysis'
required: false
static_number_processes:
description: 'Number of processes to use during static analysis'
required: false
static_search_pattern:
description: 'File pattern to search for during static analysis'
required: false
verbose:
description: 'Specify whether the Codecov output should be verbose'
required: false
version:
description: 'Specify which version of the Codecov CLI should be used. Defaults to `latest`'
required: false
branding:
color: 'red'
icon: 'umbrella'
runs:
using: "composite"
steps:
- id: codecov-ats
run: |
echo "${{ github.action_path }}" >> $GITHUB_PATH
echo "Running ATS Action version: $(cat ${{ github.action_path }}/dist/VERSION)"
${{ github.action_path }}/dist/codecov_ats.sh | tee codecov_ats_output.txt
if [[ $? == 0 ]]; then
echo "Codecov: Action complete. Check codecov_ats folder for results."
cat codecov_ats/result.json
else
echo "Codecov: Action failed to successfully run"
exit 1;
fi
shell: bash
env:
INPUTS_CODECOV_STATIC_TOKEN: ${{ inputs.static_token }}
INPUTS_CODECOV_TOKEN: ${{ inputs.token }}
INPUTS_ENTERPRISE_URL: ${{ inputs.enterprise_url }}
INPUTS_LABEL_MAX_WAIT_TIME: ${{ inputs.label_max_wait_time }}
INPUTS_OVERRIDE_BASE_COMMIT: ${{ inputs.override_base_commit }}
INPUTS_OVERRIDE_BRANCH: ${{ inputs.override_branch }}
INPUTS_OVERRIDE_COMMIT: ${{ inputs.override_commit }}
INPUTS_OVERRIDE_PARENT: ${{ inputs.override_parent }}
INPUTS_OVERRIDE_PR: ${{ inputs.override_pr }}
INPUTS_OVERRIDE_SLUG: ${{ inputs.override_slug }}
INPUTS_STATIC_FOLDERS_TO_EXCLUDE: ${{ inputs.static_folders_to_exclude }}
INPUTS_STATIC_FOLDER_TO_SEARCH: ${{ inputs.static_folder_to_search }}
INPUTS_STATIC_FORCE: ${{ inputs.static_force }}
INPUTS_STATIC_NUMBER_PROCESSES: ${{ inputs.static_number_processes }}
INPUTS_STATIC_SEARCH_PATTERN: ${{ inputs.static_search_pattern }}
INPUTS_VERBOSE: ${{ inputs.verbose }}
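A sketch running ATS and then consuming the selected tests; the secret names follow the environment-variable names mentioned in the input descriptions, the `@v0` tag is assumed, and the pytest step only illustrates how the default `CODECOV_ATS_TESTS` variable might be consumed.
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0
  - uses: codecov/codecov-ats@v0   # version tag assumed
    env:
      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
      CODECOV_STATIC_TOKEN: ${{ secrets.CODECOV_STATIC_TOKEN }}
  - run: pytest $CODECOV_ATS_TESTS   # illustrative; assumes the selected labels are exported under the default output variable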
Action ID: marketplace/peter-evans/commit-comment
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/commit-comment
Create a comment for a commit on GitHub
| Name | Required | Description |
|---|---|---|
token |
Optional | The GitHub authentication token Default: ${{ github.token }} |
repository |
Optional | The full name of the target repository. Default: ${{ github.repository }} |
sha |
Optional | The commit SHA. |
path |
Optional | Relative path of the file to comment on. |
position |
Optional | Line index in the diff to comment on. |
comment-id |
Optional | The id of the comment to update. |
body |
Optional | The comment body. Cannot be used in conjunction with `body-path`. |
body-path |
Optional | The path to a file containing the comment body. Cannot be used in conjunction with `body`. |
edit-mode |
Optional | The mode when updating a comment, "replace" or "append". Default: append |
append-separator |
Optional | The separator to use when appending to an existing comment. (`newline`, `space`, `none`) Default: newline |
reactions |
Optional | A comma or newline separated list of reactions to add to the comment. |
reactions-edit-mode |
Optional | The mode when updating comment reactions, "replace" or "append". Default: append |
| Name | Description |
|---|---|
comment-id |
The id of the created commit comment |
name: 'Commit Comment'
description: 'Create a comment for a commit on GitHub'
inputs:
token:
description: 'The GitHub authentication token'
default: ${{ github.token }}
repository:
description: 'The full name of the target repository.'
default: ${{ github.repository }}
sha:
description: 'The commit SHA.'
path:
description: 'Relative path of the file to comment on.'
position:
description: 'Line index in the diff to comment on.'
comment-id:
description: 'The id of the comment to update.'
body:
description: 'The comment body. Cannot be used in conjunction with `body-path`.'
body-path:
description: 'The path to a file containing the comment body. Cannot be used in conjunction with `body`.'
edit-mode:
description: 'The mode when updating a comment, "replace" or "append".'
default: 'append'
append-separator:
description: 'The separator to use when appending to an existing comment. (`newline`, `space`, `none`)'
default: 'newline'
reactions:
description: 'A comma or newline separated list of reactions to add to the comment.'
reactions-edit-mode:
description: 'The mode when updating comment reactions, "replace" or "append".'
default: 'append'
outputs:
comment-id:
description: 'The id of the created commit comment'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'message-square'
color: 'gray-dark'
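A minimal sketch commenting on the current commit, relying on the documented defaults for `token` and `repository` (omitting `sha` is assumed to target the current commit); the version tag is assumed.
steps:
  - uses: peter-evans/commit-comment@v3   # version tag assumed
    with:
      body: |
        Build artifacts for this commit are available.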
Action ID: marketplace/peter-evans/sendgrid-action
Author: Peter Evans
Publisher: peter-evans
Repository: github.com/peter-evans/sendgrid-action
Send email with SendGrid
name: 'SendGrid Action'
author: 'Peter Evans'
description: 'Send email with SendGrid'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'mail'
color: 'yellow'
Action ID: marketplace/aws-actions/aws-cloudformation-github-deploy
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/aws-cloudformation-github-deploy
Deploys an AWS CloudFormation stack
| Name | Required | Description |
|---|---|---|
name |
Required | The name of the CloudFormation stack |
template |
Required | The path or URL to the CloudFormation template |
capabilities |
Optional | The comma-delimited list of stack template capabilities to acknowledge. Defaults to 'CAPABILITY_IAM' Default: CAPABILITY_IAM |
parameter-overrides |
Optional | The parameters to override in the stack inputs. You can pass a comma-delimited list or a file URL. Comma-delimited list has each entry formatted as <ParameterName>=<ParameterValue> or <ParameterName>="<ParameterValue>,<ParameterValue>". A JSON file can be a local file with a "file://" prefix or remote URL. The file should look like: [ { "ParameterKey": "KeyPairName", "ParameterValue": "MyKey" }] |
no-execute-changeset |
Optional | Indicates whether to execute the change set or have it reviewed. Defaults to '0' (will execute the change set) Default: 0 |
no-delete-failed-changeset |
Optional | Indicates whether to delete a failed change set. Defaults to '0' (will delete the failed change set) Default: 0 |
no-fail-on-empty-changeset |
Optional | If the CloudFormation change set is empty, do not fail. Defaults to '0' (will fail on empty change set) Default: 0 |
disable-rollback |
Optional | Disable rollback of the stack if stack creation fails. Defaults to '0' (will rollback if stack creation fails). This input is only used for stack creation, not for stack update Default: 0 |
timeout-in-minutes |
Optional | The amount of time that can pass before the stack status becomes CREATE_FAILED. This input is only used for stack creation, not for stack update |
notification-arns |
Optional | The comma-delimited list of Amazon SNS topic ARNs to publish stack related events |
role-arn |
Optional | The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that AWS CloudFormation assumes to create the stack. AWS CloudFormation uses the role's credentials to make calls on your behalf. AWS CloudFormation always uses this role for all future operations on the stack. As long as users have permission to operate on the stack, AWS CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants least privilege. If you don't specify a value, AWS CloudFormation uses the role that was previously associated with the stack |
tags |
Optional | Key-value pairs to associate with this stack. This input should be JSON-formatted, for example [ { "Key": "string", "Value": "string" } ] |
termination-protection |
Optional | Whether to enable termination protection on the specified stack. Defaults to '0' (termination protection will be disabled). This input is only used for stack creation, not for stack update Default: 0 |
http-proxy |
Optional | Proxy to use for the AWS SDK agent |
change-set-name |
Optional | The name of the change set to create. Defaults to '<stack-name>-CS' |
| Name | Description |
|---|---|
stack-id |
The id of the deployed stack. In addition, any outputs declared in the deployed CloudFormation stack will also be set as outputs for the action, e.g. if the stack has a stack output named 'foo', this action will also have an output named 'foo'. |
name: 'AWS CloudFormation "Deploy CloudFormation Stack" Action for GitHub Actions'
description: "Deploys a AWS CloudFormation stack"
branding:
icon: "cloud"
color: "orange"
inputs:
name:
description: "The name of the CloudFormation stack"
required: true
template:
description: "The path or URL to the CloudFormation template"
required: true
capabilities:
description: "The comma-delimited list of stack template capabilities to acknowledge. Defaults to 'CAPABILITY_IAM'"
required: false
default: "CAPABILITY_IAM"
parameter-overrides:
description: 'The parameters to override in the stack inputs. You can pass a comma-delimited list or a file URL. Comma-delimited list has each entry formatted as <ParameterName>=<ParameterValue> or <ParameterName>="<ParameterValue>,<ParameterValue>". A JSON file can be a local file with a "file://" prefix or remote URL. The file should look like: [ { "ParameterKey": "KeyPairName", "ParameterValue": "MyKey" }]'
required: false
no-execute-changeset:
description: "Indicates whether to execute to the change set or have it reviewed. Default to '0' (will execute the change set)"
required: false
default: "0"
no-delete-failed-changeset:
description: "Indicates whether to delete to a failed change set. Default to '0' (will delete the failed changeset)"
required: false
default: "0"
no-fail-on-empty-changeset:
description: "If the CloudFormation change set is empty, do not fail. Defaults to '0' (will fail on empty change set)"
required: false
default: "0"
disable-rollback:
description: "Disable rollback of the stack if stack creation fails. Defaults to '0' (will rollback if stack creation fails). This input is only used for stack creation, not for stack update"
required: false
default: "0"
timeout-in-minutes:
description: "The amount of time that can pass before the stack status becomes CREATE_FAILED. This input is only used for stack creation, not for stack update"
required: false
notification-arns:
description: "The comma-delimited list of Amazon SNS topic ARNs to publish stack related events"
required: false
role-arn:
description: "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that AWS CloudFormation assumes to create the stack. AWS CloudFormation uses the role's credentials to make calls on your behalf. AWS CloudFormation always uses this role for all future operations on the stack. As long as users have permission to operate on the stack, AWS CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants least privilege. If you don't specify a value, AWS CloudFormation uses the role that was previously associated with the stack"
required: false
tags:
description: 'Key-value pairs to associate with this stack. This input should be JSON-formatted, for example [ { "Key": "string", "Value": "string" } ]'
required: false
termination-protection:
description: "Whether to enable termination protection on the specified stack. Defaults to '0' (terminated protection will be disabled) This input is only used for stack creation, not for stack update"
required: false
default: "0"
http-proxy:
description: 'Proxy to use for the AWS SDK agent'
required: false
change-set-name:
description: "The name of the change set to create. Defaults to '<stack-name>-CS'"
required: false
outputs:
stack-id:
description: "The id of the deployed stack. In addition, any outputs declared in the deployed CloudFormation stack will also be set as outputs for the action, e.g. if the stack has a stack output named 'foo', this action will also have an output named 'foo'."
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/lukka/get-vcpkg
Author: Luca Cappa https://github.com/lukka
Publisher: lukka
Repository: github.com/lukka/get-vcpkg
Run vcpkg to build C/C++ dependencies and cache them automatically.
| Name | Required | Description |
|---|---|---|
vpkg_root_path |
Optional | Specify a directory where vcpkg is already installed. Default: ${{ github.workspace }}/vcpkg |
vcpkgjson_path |
Optional | Specify the full path to vcpkg.json. Default: ${{ github.workspace }}/vcpkg.json |
appended_cache_key |
Optional | This value is added to the precomputed key used to restore/save the cached artifacts produced by vcpkg. |
build_command |
Required | The full command to build, e.g. cmake -S ./src -B ./build && cmake --build ./build |
| Name | Description |
|---|---|
cache_key |
The key of the newly created cache entry. |
vcpkg_root |
The path to vcpkg. |
# Copyright (c) 2021 Luca Cappa
# Released under the term specified in file LICENSE.txt
# SPDX short identifier: MIT
name: 'get-vcpkg'
description: 'Run vcpkg to build C/C++ dependencies and cache them automatically.'
author: 'Luca Cappa https://github.com/lukka'
inputs:
# vcpkg must be already checked out. Two options:
# 1. Highly suggested: use vcpkg as submodule of your repository, for example under <repo-root>/vcpkg/.
# - uses: lukka/get-vcpkg
# # This is useless as it is the default, but shown here anyway:
# with:
# ${{ github.workspace }}/vcpkg
# 2. Another option is to check it out prior to this action, e.g.:
# - uses: actions/checkout@v2
# with:
# repository: myname/myvcpkg
# path: vcpkg
vpkg_root_path:
required: false
description: "Specify a directory where vcpkg is already installed."
default: ${{ github.workspace }}/vcpkg
vcpkgjson_path:
required: false
description: "Specify the full path to vcpkg.json."
default: ${{ github.workspace }}/vcpkg.json
appended_cache_key:
required: false
description: "This value is added to the precomputed key used to restore/save the cached artifacts produced by vcpkg."
build_command:
required: true
description: "The full command to build, e.g. cmake -S ./src -B ./build && cmake --build ./build"
outputs:
cache_key:
description: "The key of the newly created cache entry."
vcpkg_root:
description: "The path to vcpkg."
runs:
using: "composite"
steps:
- name: Set VCPKG_DEFAULT_BINARY_CACHE and VCPKG_ROOT
shell: bash
run: |
echo "VCPKG_DEFAULT_BINARY_CACHE=${{ inputs.vpkg_root_path }}/cache" >> $GITHUB_ENV
echo "VCPKG_ROOT=${{ inputs.vpkg_root_path }}" >> $GITHUB_ENV
- name: Set VCPKG_COMMITID
shell: bash
run: |
echo "VCPKG_COMMITID=$(git -C $VCPKG_ROOT rev-parse --short HEAD)" >> $GITHUB_ENV
- name: Set VCPKG_CACHE_KEY
shell: bash
run: |
echo "VCPKG_CACHE_KEY=${{ hashFiles( inputs.vcpkgjson_path ) }}-$VCPKG_COMMITID-$ImageVersion-$ImageOS" >> $GITHUB_ENV
- id: set-output
name: Dump variables
shell: bash
run: |
echo "VCPKG_ROOT=$VCPKG_ROOT"
echo "VCPKG_DEFAULT_BINARY_CACHE=$VCPKG_DEFAULT_BINARY_CACHE"
mkdir -p $VCPKG_DEFAULT_BINARY_CACHE
echo "VCPKG_COMMITID=$VCPKG_COMMITID"
echo "VCPKG_CACHE_KEY=$VCPKG_CACHE_KEY"
echo "::set-output name=cache_key::$CACHE_KEY"
echo "::set-output name=vcpkg_root::$VCPKG_ROOT"
# Restore both vcpkg and its artifacts from the GitHub cache service.
# simulating: uses: actions/cache@v2
# with:
# # The first path is the location of vcpkg (it contains the vcpkg executable and data files).
# # The second path is where vcpkg generates artifacts while consuming the vcpkg.json manifest file.
# # The other paths starting with '!' are exclusions: they contain temporary files generated during the build of the installed packages.
# path: |
# ${{ env.VCPKG_ROOT }}
# !${{ env.VCPKG_ROOT }}/buildtrees
# !${{ env.VCPKG_ROOT }}/packages
# !${{ env.VCPKG_ROOT }}/downloads
# !${{ env.VCPKG_ROOT }}/installed
# # The key is composed in a way that it gets properly invalidated: this must happen whenever vcpkg's Git commit id changes, or the list of packages changes. In this case a cache miss must happen and a new entry with a new key will be pushed to the GitHub cache service.
# # The key includes: the hash of the vcpkg.json file, the hash of the vcpkg Git commit id, and the vcpkg triplet used. The vcpkg commit id would suffice, but computing a hash of it does no harm.
# # Note: given a key, the cache content is immutable. If a cache entry has been created improperly, then in order to recreate the right content the key must be changed as well, and it must be brand new (i.e. not already existing).
# key: ${{ env.VCPKG_CACHE_KEY }}
- name: Restore vcpkg and its artifacts.
shell: bash
run: |
echo "::group::Restoring cache ..."
npm install
which node
node --version
node ${{ github.workspace }}/restore.js
echo "::endgroup::"
#- name: Show content of workspace after cache has been restored
# run: find $RUNNER_WORKSPACE
# shell: bash
- name: Build.
shell: bash
run: |
echo "::group::Building ..."
echo "Executing the build command: '${{ inputs.build_command }}'"
eval "${{ inputs.build_command}}"
echo "::endgroup::"
- name: Save vcpkg and its artifacts.
shell: bash
run: |
echo "::group::Saving cache ..."
npm install
which node
node --version
node ${{ github.workspace }}/save.js
echo "::endgroup::"
Action ID: marketplace/azure/acr-build
Author: Alessandro Vozza
Publisher: azure
Repository: github.com/azure/acr-build
Use ACR to build a container image
| Name | Required | Description |
|---|---|---|
service_principal |
Required | Service Principal with Contributor role on the ACR |
service_principal_password |
Required | Service Principal password |
tenant |
Required | Azure Container Registry tenant |
registry |
Required | The name of the ACR, minus the .azurecr.io |
repository |
Required | Repository to use |
git_access_token |
Required | Github access token for private repositories |
image |
Optional | Docker image name |
tag |
Optional | Docker image tag, default to the commit SHA |
branch |
Optional | Branch to build from, defaults to master |
folder |
Required | The folder in the Github repo that holds the source |
dockerfile |
Optional | The location of the Dockerfile; defaults to ./Dockerfile |
build_args |
Optional | JSON specifying key=value pairs as Docker build arguments |
name: "Azure Container Registry Build"
author: "Alessandro Vozza"
branding:
icon: "code"
color: "blue"
description: "Use ACR to build a container image"
inputs:
service_principal:
description: "Service Principal with Contributor role on the ACR"
required: true
service_principal_password:
description: "Service Principal password"
required: true
tenant:
description: "Azure Container Registry tenant"
required: true
registry:
description: "The name of the ACR, minus the .azurecr.io"
required: true
repository:
description: "Repository to use"
required: true
git_access_token:
description: 'Github access token for private repositories'
required: true
image:
description: "Docker image name"
required: false
tag:
description: "Docker image tag, default to the commit SHA"
required: false
branch:
description: "Branch to build from, defaults to master"
required: false
folder:
description: "The folder in the Github repo that holds the source"
required: true
dockerfile:
description: "The location of the Dockerfile; defaults to ./Dockerfile"
required: false
build_args:
description: "JSON specifying key=value pairs as as Docker build arguments"
required: false
runs:
using: "docker"
image: "Dockerfile"
Action ID: marketplace/dflook/tofu-destroy
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-destroy
Destroys all resources in an OpenTofu workspace
| Name | Required | Description |
|---|---|---|
path |
Optional | The path to the OpenTofu root module directory. Default: . |
workspace |
Optional | The name of the OpenTofu workspace to destroy. Default: default |
variables |
Optional | Variables to set for the tofu destroy. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files). Variables set here override any given in `var_file`s. |
var_file |
Optional | List of tfvars files to use, one per line. Paths should be relative to the GitHub Actions workspace |
backend_config |
Optional | List of OpenTofu backend config values, one per line. |
backend_config_file |
Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace |
parallelism |
Optional | Limit the number of concurrent operations Default: 0 |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure`, this output may be set. The value may be one of: - `destroy-failed` - The OpenTofu destroy operation failed. - `state-locked` - The OpenTofu state lock could not be obtained because it was already locked. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run steps. |
lock-info |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set. It is a json object containing any available state lock information and typically has the form: ```json { "ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880", "Path": "terraform-github-actions/test-unlock-state", "Operation": "OperationTypeApply", "Who": "root@e9d43b0c6478", "Version": "1.3.7", "Created": "2023-01-28 00:16:41.560904373 +0000 UTC", "Info": "" } ``` |
name: tofu-destroy
description: Destroys all resources in an OpenTofu workspace
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu root module directory.
required: false
default: "."
workspace:
description: The name of the OpenTofu workspace to destroy.
required: false
default: "default"
variables:
description: |
Variables to set for the tofu destroy. This should be valid OpenTofu syntax - like a [variable definition file](https://opentofu.org/docs/language/values/variables/#variable-definitions-tfvars-files).
Variables set here override any given in `var_file`s.
required: false
var_file:
description: |
List of tfvars files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
backend_config:
description: List of OpenTofu backend config values, one per line.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
parallelism:
description: Limit the number of concurrent operations
required: false
default: "0"
outputs:
failure-reason:
description: |
When the job outcome is `failure`, this output may be set. The value may be one of:
- `destroy-failed` - The OpenTofu destroy operation failed.
- `state-locked` - The OpenTofu state lock could not be obtained because it was already locked.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run steps.
lock-info:
description: |
When the job outcome is `failure` and the failure-reason is `state-locked`, this output will be set.
It is a json object containing any available state lock information and typically has the form:
```json
{
"ID": "838fbfde-c5cd-297f-84a4-d7578b4a4880",
"Path": "terraform-github-actions/test-unlock-state",
"Operation": "OperationTypeApply",
"Who": "root@e9d43b0c6478",
"Version": "1.3.7",
"Created": "2023-01-28 00:16:41.560904373 +0000 UTC",
"Info": ""
}
```
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/destroy.sh
branding:
icon: globe
color: purple
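A minimal sketch of a destroy job using the inputs and outputs listed above. The `@v2` ref and the path/workspace names are assumptions, and backend or cloud-provider credentials (normally passed as environment variables) are omitted.
```yaml
jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: destroy
        uses: dflook/tofu-destroy@v2   # the version ref is a placeholder
        with:
          path: infra/env/dev          # placeholder root module path
          workspace: dev
          var_file: |
            infra/env/dev.tfvars
      # The failure-reason output can gate follow-up steps, e.g. when the state was locked.
      - if: failure() && steps.destroy.outputs.failure-reason == 'state-locked'
        run: echo "State locked: ${{ steps.destroy.outputs.lock-info }}"
```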
Action ID: marketplace/amirisback/consumable-code-the-meal-db-api
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/consumable-code-the-meal-db-api
Retrofit has been Handled, Consumable code for request Public API (The Meal DB API)
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Consumable Code The Meal DB API'
description: 'Retrofit has been Handled, Consumable code for request Public API (The Meal DB API)'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/amirisback/android-custom-search-view
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-custom-search-view
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/actions/maven-dependency-submission-action
Author: Unknown
Publisher: actions
Repository: github.com/actions/maven-dependency-submission-action
A GitHub Action for Maven projects to submit a complete dependency tree to populate the GitHub Dependency Graph
| Name | Required | Description |
|---|---|---|
directory |
Optional | The directory that contains the maven project Default: ${{ github.workspace }} |
settings-file |
Optional | Optional path to a Maven settings.xml file for the dependencies to be resolved |
ignore-maven-wrapper |
Optional | Flag for optionally ignoring any maven wrapper files (mvnw) and instead relying on the mvn provided on the PATH |
maven-args |
Optional | Additional maven arguments to add to the command line invocation of maven when it generates the dependency snapshot |
token |
Optional | The GitHub token to use to submit the dependency snapshot to the repository Default: ${{ github.token }} |
snapshot-sha |
Optional | The SHA that the results will be linked to in the dependency snapshot |
snapshot-ref |
Optional | The ref that the results will be linked to in the dependency snapshot |
detector-name |
Optional | The name of the detector that generated the dependency snapshot |
detector-version |
Optional | The version of the detector that generated the dependency snapshot |
detector-url |
Optional | The URL to the detector that generated the dependency snapshot |
correlator |
Optional | An optional identifier to distinguish between multiple dependency snapshots of the same type |
name: Maven Dependency Tree Dependency Submission
description: A GitHub Action for Maven projects to submit a complete dependency tree to populate the GitHub Dependency Graph
branding:
icon: feather
color: green
inputs:
directory:
description: The directory that contains the maven project
type: string
default: ${{ github.workspace }}
settings-file:
description: Optional path to a Maven settings.xml file for the dependencies to be resolved
type: string
ignore-maven-wrapper:
description: Flag for optionally ignoring any maven wrapper files (mvnw) and instead relying on the mvn provided on the PATH
type: boolean
default: false
maven-args:
description: Additional maven arguments to add to the command line invocation of maven when it generates the dependency snapshot
type: string
default: ''
token:
description: The GitHub token to use to submit the dependency snapshot to the repository
type: string
default: ${{ github.token }}
snapshot-sha:
description: The SHA that the results will be linked to in the dependency snapshot
type: string
required: false
default: ''
snapshot-ref:
description: The ref that the results will be linked to in the dependency snapshot
type: string
required: false
default: ''
detector-name:
description: The name of the detector that generated the dependency snapshot
type: string
detector-version:
description: The version of the detector that generated the dependency snapshot
type: string
detector-url:
description: The URL to the detector that generated the dependency snapshot
type: string
correlator:
description: An optional identifier to distinguish between multiple dependency snapshots of the same type
type: string
required: false
default: ''
runs:
using: node20
main: dist/index.js
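A sketch of a dependency-submission workflow built around the action above. The `@v4` refs and the Java setup are assumptions; the Dependency Graph snapshot API does require a token with `contents: write`.
```yaml
on:
  push:
    branches: [main]
permissions:
  contents: write   # required to submit a dependency snapshot
jobs:
  submit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - uses: actions/maven-dependency-submission-action@v4   # the version ref is a placeholder
        with:
          directory: ${{ github.workspace }}
```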
Action ID: marketplace/github/privileged-requester
Author: Unknown
Publisher: github
Repository: github.com/github/privileged-requester
Checks a PR against configurable criteria to determine whether or not the PR should be automatically approved
| Name | Required | Description |
|---|---|---|
github_token |
Required | 'The GitHub token used to create an authenticated client - Provided for you by default! - Repository scoped token'
You can use the default provided token or you can provide a PAT as an alternative robot user token
Default: ${{ github.token }} |
handle |
Required | When using the default github.token (from above), the "handle" is fetched from this input since the token is repository scoped and it cannot even read its own handle. You should not need to change this input. Default: github-actions[bot] |
path |
Required | Path where the privileged requester configuration can be found Default: config/privileged-requester.yaml |
prCreator |
Required | The creator of the PR for this pull request event Default: ${{ github.event.pull_request.user.login }} |
prNumber |
Required | The number of the PR for this pull request event Default: ${{ github.event.pull_request.number }} |
checkCommits |
Required | An option to check that every commit in the PR is made from the privileged requester Default: true |
checkDiff |
Required | An option to check that the PR diff only has a removal diff, with no additions Default: true |
checkLabels |
Required | An option to check that the labels on the PR match those defined in the privileged requester config Default: true |
commitVerification |
Required | Whether or not to validate all commits have proper verification via GPG signed commits Default: false |
fallback_to_commit_author |
Required | Whether or not to fallback to the commit author value if the commit login value is missing Default: false |
| Name | Description |
|---|---|
approved |
Whether or not the PR was approved - 'true' or 'false' |
commits_verified |
The string "true" if all commits in the PR are signed/verified |
name: 'Privileged Requester'
description: 'Checks a PR against configurable criteria to determine whether or not the PR should be automatically approved'
branding:
icon: 'check'
color: 'green'
inputs:
github_token:
description: |
'The GitHub token used to create an authenticated client - Provided for you by default! - Repository scoped token'
You can use the default provided token or you can provide a PAT as an alternative robot user token
required: true
default: ${{ github.token }}
handle:
description: 'When using the default github.token (from above), the "handle" is fetched from this input since the token is repository scoped and it cannot even read its own handle. You should not need to change this input.'
required: true
default: 'github-actions[bot]'
path:
description: 'Path where the privileged requester configuration can be found'
required: true
default: config/privileged-requester.yaml
prCreator:
description: 'The creator of the PR for this pull request event'
required: true
default: ${{ github.event.pull_request.user.login }}
prNumber:
description: 'The number of the PR for this pull request event'
required: true
default: ${{ github.event.pull_request.number }}
checkCommits:
description: 'An option to check that every commit in the PR is made from the privileged requester'
required: true
default: 'true'
checkDiff:
description: 'An option to check that the PR diff only has a removal diff, with no additions'
required: true
default: 'true'
checkLabels:
description: 'An option to check that the labels on the PR match those defined in the privileged requester config'
required: true
default: 'true'
commitVerification:
description: 'Whether or not to validate all commits have proper verification via GPG signed commits'
required: true
default: 'false'
fallback_to_commit_author:
description: 'Whether or not to fallback to the commit author value if the commit login value is missing'
required: true
default: 'false'
outputs:
approved: # output will be available to future steps
description: "Whether or not the PR was approved - 'true' or 'false'"
commits_verified:
description: The string "true" if all commits in the PR are signed/verified
runs:
using: 'node20'
main: 'dist/index.js'
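A hedged sketch of how the action above might be invoked on pull requests. It assumes a `config/privileged-requester.yaml` file already exists in the repository (the default `path`); the `@v2` ref and the permission set are assumptions.
```yaml
on:
  pull_request:
permissions:
  contents: read
  pull-requests: write   # assumed to be needed so the action can approve the PR
jobs:
  privileged-requester:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: github/privileged-requester@v2   # the version ref is a placeholder
        with:
          checkDiff: "false"           # example: also allow PRs that add lines
          commitVerification: "true"   # example: require signed commits
```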
Action ID: marketplace/actions/publish-action
Author: Unknown
Publisher: actions
Repository: github.com/actions/publish-action
Move the major version tag to point to a specified ref
| Name | Required | Description |
|---|---|---|
source-tag |
Required | Tag name that the major tag will point to. Examples: v1.2.3, 1.2.3 |
slack-webhook |
Optional | Slack Webhook URL to post a message |
token |
Optional | Token to get an authenticated Octokit Default: ${{ github.token }} |
| Name | Description |
|---|---|
major-tag |
The major version tag that has been updated (created). Examples: v1, 1 |
name: 'Publish action versions'
description: 'Move the major version tag to point to a specified ref'
inputs:
source-tag:
description: 'Tag name that the major tag will point to. Examples: v1.2.3, 1.2.3'
required: true
slack-webhook:
description: 'Slack Webhook URL to post a message'
token:
description: 'Token to get an authenticated Octokit'
default: ${{ github.token }}
outputs:
major-tag:
description: 'The major version tag that has been updated (created). Examples: v1, 1'
runs:
using: 'node24'
main: 'dist/index.js'
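A minimal sketch that moves the major tag whenever a release is published, using the inputs above. The `@v0` ref and the Slack secret name are placeholders.
```yaml
on:
  release:
    types: [published]
permissions:
  contents: write   # needed to move the major version tag
jobs:
  update-major-tag:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/publish-action@v0   # the version ref is a placeholder
        with:
          source-tag: ${{ github.event.release.tag_name }}
          slack-webhook: ${{ secrets.SLACK_WEBHOOK }}   # optional; placeholder secret name
```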
Action ID: marketplace/amirisback/android-research-tech-deprecated
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-research-tech-deprecated
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/mheap/setup-deck
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/setup-deck
Install decK from Kong
| Name | Required | Description |
|---|---|---|
deck-version |
Optional | The version of decK to install |
token |
Optional | The GitHub token to use when fetching the latest version of deck Default: ${{ github.token }} |
wrapper |
Optional | Add a wrapper script to make stdout, stderr and errorcode available as outputs Default: false |
name: Setup decK
description: Install decK from Kong
runs:
using: node12
main: dist/index.js
inputs:
deck-version:
description: The version of decK to install
required: false
token:
description: The GitHub token to use when fetching the latest version of deck
default: ${{ github.token }}
required: false
wrapper:
description: Add a wrapper script to make stdout, stderr and errorcode available as outputs
default: "false"
required: false
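A short usage sketch; the `@v1` ref and the pinned decK version are placeholders.
```yaml
steps:
  - uses: mheap/setup-deck@v1   # the version ref is a placeholder
    with:
      deck-version: "1.19.0"    # omit to install the latest release
  - run: deck version           # decK is now on the PATH
```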
Action ID: marketplace/peter-evans/create-issue-from-file
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/create-issue-from-file
An action to create an issue using content from a file
| Name | Required | Description |
|---|---|---|
token |
Optional | The GitHub authentication token Default: ${{ github.token }} |
repository |
Optional | The target GitHub repository Default: ${{ github.repository }} |
issue-number |
Optional | The issue number of an existing issue to update |
title |
Required | The title of the issue |
content-filepath |
Optional | The file path to the issue content |
labels |
Optional | A comma or newline-separated list of labels |
assignees |
Optional | A comma or newline-separated list of assignees (GitHub usernames) |
| Name | Description |
|---|---|
issue-number |
The number of the created issue |
name: 'Create Issue From File'
description: 'An action to create an issue using content from a file'
inputs:
token:
description: 'The GitHub authentication token'
default: ${{ github.token }}
repository:
description: 'The target GitHub repository'
default: ${{ github.repository }}
issue-number:
description: 'The issue number of an existing issue to update'
title:
description: 'The title of the issue'
required: true
content-filepath:
description: 'The file path to the issue content'
labels:
description: 'A comma or newline-separated list of labels'
assignees:
description: 'A comma or newline-separated list of assignees (GitHub usernames)'
outputs:
issue-number:
description: 'The number of the created issue'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'alert-circle'
color: 'orange'
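A sketch of a reporting job using the inputs and output above. The `@v5` ref and the report-generating script are placeholders; the token used must be allowed to create issues (`issues: write`).
```yaml
steps:
  - uses: actions/checkout@v4
  - run: ./generate-report.sh > report.md   # placeholder step that produces the issue body
  - id: issue
    uses: peter-evans/create-issue-from-file@v5   # the version ref is a placeholder
    with:
      title: Nightly report
      content-filepath: report.md
      labels: |
        report
        automated
  - run: echo "Created issue #${{ steps.issue.outputs.issue-number }}"
```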
Action ID: marketplace/google-github-actions/deploy-appengine
Author: Google LLC
Publisher: google-github-actions
Repository: github.com/google-github-actions/deploy-appengine
Deploy and promote a new service to Google App Engine.
| Name | Required | Description |
|---|---|---|
project_id |
Optional | ID of the Google Cloud project. If not provided, this is inherited from the environment. |
working_directory |
Optional | The working directory to use. **GitHub Actions do not honor [default working-directory settings](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrun).** The `deliverables` input is a relative path based on this setting. |
deliverables |
Optional | The [yaml files](https://cloud.google.com/appengine/docs/standard/nodejs/configuration-files#optional_configuration_files) for the services or configurations you want to deploy. If not given, defaults to app.yaml in the current directory. If that is not found, attempts to automatically generate necessary configuration files (such as app.yaml) in the current directory (example, `app.yaml cron.yaml`). Note: The additional deliverables may require additional roles for your service account user. |
build_env_vars |
Optional | List of build environment variables that should be set in the build environment. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml build_env_vars: |- FRUIT=apple SENTENCE=" this will retain leading and trailing spaces " ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. To include build environment variables defined in another file, use the [`includes` directive][includes-directive] in your `app.yaml`. This will overwrite any duplicate key environment variables defined in the `app.yaml`. |
env_vars |
Optional | List of environment variables that should be set in the environment. These are comma-separated or newline-separated `KEY=VALUE`. Keys or values that contain separators must be escaped with a backslash (e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is trimmed unless values are quoted. ```yaml env_vars: |- FRUIT=apple SENTENCE=" this will retain leading and trailing spaces " ``` This value will only be set if the input is a non-empty value. If a non-empty value is given, the field values will be overwritten (not merged). To remove all values, set the value to the literal string `{}`. To include environment variables defined in another file, use the [`includes` directive][includes-directive] in your `app.yaml`. This will overwrite any duplicate key environment variables defined in the `app.yaml`. |
image_url |
Optional | Fully-qualified name of the container image to deploy. For example: us-docker.pkg.dev/cloudrun/container/hello:latest or us-docker.pkg.dev/my-project/my-container/image:1.2.3 |
version |
Optional | The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you. |
promote |
Optional | Promote the deployed version to receive all traffic. Default: true |
flags |
Optional | Space-separated list of additional flags to pass to the deploy command. This can be used to apply advanced features that are not exposed via this GitHub Action. ```yaml with: flags: '--ignore-file=...' ``` Flags that include other flags must quote the _entire_ outer flag value. For example, to pass `--args=-X=123`: ```yaml with: flags: '--ignore-file=... "--args=-X=123"' ``` See the [complete list of flags](https://cloud.google.com/sdk/gcloud/reference/app/deploy#FLAGS) for more information. Please note, this GitHub Action does not parse or validate the flags. You are responsible for making sure the flags are available on the gcloud version and subcommand. |
gcloud_version |
Optional | Version of the Cloud SDK to install. If unspecified or set to "latest", the latest available gcloud SDK version for the target platform will be installed. Example: "290.0.1". |
gcloud_component |
Optional | Version of the Cloud SDK components to install and use. If unspecified, the latest or released version will be used. This is the equivalent of running 'gcloud alpha COMMAND' or 'gcloud beta COMMAND'. Valid values are `alpha` or `beta`. The default value is to use the stable track. |
| Name | Description |
|---|---|
name |
The fully-qualified resource name of the deployment. This will be of the format `apps/[PROJECT]/services/[SERVICE]/versions/[VERSION]`. |
runtime |
The computed deployment runtime. |
service_account_email |
The email address of the runtime service account. |
serving_status |
The current serving status. The value is usually "SERVING", unless the deployment failed to start. |
version_id |
Unique identifier for the version, or the specified version if one was given. |
version_url |
URL of the version of the AppEngine service that was deployed. |
name: 'Deploy to App Engine'
author: 'Google LLC'
description: |-
Deploy and promote a new service to Google App Engine.
inputs:
project_id:
description: |-
ID of the Google Cloud project. If not provided, this is inherited from
the environment.
required: false
working_directory:
description: |-
The working directory to use. **GitHub Actions do not honor [default
working-directory
settings](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrun).**
The `deliverables` input is a relative path based on this setting.
required: false
deliverables:
description: |-
The [yaml
files](https://cloud.google.com/appengine/docs/standard/nodejs/configuration-files#optional_configuration_files)
for the services or configurations you want to deploy. If not given,
defaults to app.yaml in the current directory. If that is not found,
attempts to automatically generate necessary configuration files (such as
app.yaml) in the current directory (example, `app.yaml cron.yaml`).
Note: The additional deliverables may require additional roles for your
service account user.
required: false
build_env_vars:
description: |-
List of build environment variables that should be set in the build
environment. These are comma-separated or newline-separated `KEY=VALUE`.
Keys or values that contain separators must be escaped with a backslash
(e.g. `\,` or `\\n`) unless quoted. Any leading or trailing whitespace is
trimmed unless values are quoted.
```yaml
build_env_vars: |-
FRUIT=apple
SENTENCE=" this will retain leading and trailing spaces "
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
To include build environment variables defined in another file, use the
[`includes` directive][includes-directive] in your `app.yaml`.
This will overwrite any duplicate key environment variables defined in the
`app.yaml`.
required: false
env_vars:
description: |-
List of environment variables that should be set in the environment. These
are comma-separated or newline-separated `KEY=VALUE`. Keys or values that
contain separators must be escaped with a backslash (e.g. `\,` or `\\n`)
unless quoted. Any leading or trailing whitespace is trimmed unless values
are quoted.
```yaml
env_vars: |-
FRUIT=apple
SENTENCE=" this will retain leading and trailing spaces "
```
This value will only be set if the input is a non-empty value. If a
non-empty value is given, the field values will be overwritten (not
merged). To remove all values, set the value to the literal string `{}`.
To include environment variables defined in another file, use the
[`includes` directive][includes-directive] in your `app.yaml`.
This will overwrite any duplicate key environment variables defined in the
`app.yaml`.
required: false
image_url:
description: |-
Fully-qualified name
of the container image to deploy. For example:
us-docker.pkg.dev/cloudrun/container/hello:latest
or
us-docker.pkg.dev/my-project/my-container/image:1.2.3
required: false
version:
description: |-
The version of the app that will be created or replaced by this
deployment. If you do not specify a version, one will be generated for
you.
required: false
promote:
description: |-
Promote the deployed version to receive all traffic.
required: false
default: 'true'
flags:
description: |-
Space-separated list of additional flags to pass to the
deploy command. This can be used to apply advanced features that are not
exposed via this GitHub Action.
```yaml
with:
flags: '--ignore-file=...'
```
Flags that include other flags must quote the _entire_ outer flag value. For
example, to pass `--args=-X=123`:
```yaml
with:
flags: '--ignore-file=... "--args=-X=123"'
```
See the [complete list of
flags](https://cloud.google.com/sdk/gcloud/reference/app/deploy#FLAGS) for
more information.
Please note, this GitHub Action does not parse or validate the flags. You
are responsible for making sure the flags are available on the gcloud
version and subcommand.
required: false
gcloud_version:
description: |-
Version of the Cloud SDK to install. If unspecified or set to "latest",
the latest available gcloud SDK version for the target platform will be
installed. Example: "290.0.1".
required: false
gcloud_component:
description: |-
Version of the Cloud SDK components to install and use. If unspecified,
the latest or released version will be used. This is the equivalent of
running 'gcloud alpha COMMAND' or 'gcloud beta COMMAND'. Valid values are
`alpha` or `beta`. The default value is to use the stable track.
required: false
outputs:
name:
description: |-
The fully-qualified resource name of the deployment. This will be of the
format `apps/[PROJECT]/services/[SERVICE]/versions/[VERSION]`.
runtime:
description: |-
The computed deployment runtime.
service_account_email:
description: |-
The email address of the runtime service account.
serving_status:
description: |-
The current serving status. The value is usually "SERVING", unless the
deployment failed to start.
version_id:
description: |-
Unique identifier for the version, or the specified version if one was
given.
version_url:
description: |-
URL of the version of the AppEngine service that was deployed.
branding:
icon: 'code'
color: 'blue'
runs:
using: 'node24'
main: 'dist/main/index.js'
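A hedged deployment sketch. Authentication is shown via `google-github-actions/auth` with Workload Identity Federation, but the exact auth setup, the `@v2` refs, the project id, and the secret names are all assumptions.
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # assumed Workload Identity Federation auth
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}   # placeholder secrets
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}
      - id: deploy
        uses: google-github-actions/deploy-appengine@v2   # the version ref is a placeholder
        with:
          project_id: my-project   # placeholder project
          deliverables: app.yaml
          promote: 'true'
      - run: echo "Deployed ${{ steps.deploy.outputs.version_id }} at ${{ steps.deploy.outputs.version_url }}"
```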
Action ID: marketplace/peter-evans/jira2md
Author: Unknown
Publisher: peter-evans
Repository: github.com/peter-evans/jira2md
A GitHub action to convert between Jira text formatting and GitHub flavored markdown
| Name | Required | Description |
|---|---|---|
input-text |
Required | The input text to convert |
mode |
Optional | The conversion mode; `jira2md`, `md2jira`, `md2html` or `jira2html` Default: jira2md |
| Name | Description |
|---|---|
output-text |
The converted text |
name: 'jira2md'
description: 'A GitHub action to convert between Jira text formatting and GitHub flavored markdown'
inputs:
input-text:
description: 'The input text to convert'
required: true
mode:
description: 'The conversion mode; `jira2md`, `md2jira`, `md2html` or `jira2html`'
default: jira2md
outputs:
output-text:
description: 'The converted text'
runs:
using: 'node24'
main: 'dist/index.js'
branding:
icon: 'activity'
color: 'blue'
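A small sketch converting Jira markup to GitHub flavored markdown with the action above; the `@v2` ref is a placeholder.
```yaml
steps:
  - id: convert
    uses: peter-evans/jira2md@v2   # the version ref is a placeholder
    with:
      mode: jira2md
      input-text: |
        h1. Release notes
        *bold* and {{monospace}} text
  - run: echo "${{ steps.convert.outputs.output-text }}"
```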
Action ID: marketplace/dokku/semver-generator
Author: Dokku
Publisher: dokku
Repository: github.com/dokku/semver-generator
Github Action for generating a semver version
| Name | Required | Description |
|---|---|---|
bump |
Required | The type of bump to perform (major, minor, patch) |
input |
Required | The input version to bump |
| Name | Description |
|---|---|
version |
The computed version |
---
name: "semver-generator"
description: "Github Action for generating a semver version"
author: "Dokku"
branding:
icon: "tag"
color: "blue"
inputs:
bump:
description: "The type of bump to perform (major, minor, patch)"
required: true
input:
description: "The input version to bump"
required: true
runs:
using: "docker"
image: "Dockerfile"
entrypoint: "/usr/local/bin/github-entrypoint"
outputs:
version:
description: "The computed version"
Action ID: marketplace/ammaraskar/mypy
Author: Jukka Lehtosalo and contributors
Publisher: ammaraskar
Repository: github.com/ammaraskar/mypy
Optional Static Typing for Python.
| Name | Required | Description |
|---|---|---|
options |
Optional | Options passed to mypy. Use `mypy --help` to see available options. |
paths |
Optional | Explicit paths to run mypy on. Defaults to the current directory.
Default: . |
version |
Optional | Mypy version to use (PEP440) - e.g. "0.910" |
install_types |
Optional | Whether to automatically install missing library stub packages. ('yes'|'no', default: 'yes')
Default: yes |
install_project_dependencies |
Optional | Whether to attempt to install project dependencies into mypy environment. ('yes'|'no', default: 'yes')
Default: yes |
name: "Mypy"
description: "Optional Static Typing for Python."
author: "Jukka Lehtosalo and contributors"
inputs:
options:
description: >
Options passed to mypy. Use `mypy --help` to see available options.
required: false
paths:
description: >
Explicit paths to run mypy on. Defaults to the current directory.
required: false
default: "."
version:
description: >
Mypy version to use (PEP440) - e.g. "0.910"
required: false
default: ""
install_types:
description: >
Whether to automatically install missing library stub packages.
('yes'|'no', default: 'yes')
default: "yes"
install_project_dependencies:
description: >
Whether to attempt to install project dependencies into mypy
environment. ('yes'|'no', default: 'yes')
default: "yes"
branding:
color: "blue"
icon: "check-circle"
runs:
using: composite
steps:
- name: mypy setup # zizmor: ignore[template-injection]
shell: bash
run: |
echo ::group::Installing mypy...
export PIP_DISABLE_PIP_VERSION_CHECK=1
if [ "$RUNNER_OS" == "Windows" ]; then
HOST_PYTHON=python
else
HOST_PYTHON=python3
fi
venv_script="import os.path; import venv; import sys;
path = os.path.join(r'${{ github.action_path }}', '.mypy-venv');
venv.main([path]);
bin_subdir = 'Scripts' if sys.platform == 'win32' else 'bin';
print(os.path.join(path, bin_subdir, 'python'));
"
VENV_PYTHON=$(echo $venv_script | "$HOST_PYTHON")
mypy_spec="mypy"
if [ -n "${{ inputs.version }}" ]; then
mypy_spec+="==${{ inputs.version }}"
fi
if ! "$VENV_PYTHON" -m pip install "$mypy_spec"; then
echo "::error::Could not install mypy."
exit 1
fi
echo ::endgroup::
if [ "${{ inputs.install_project_dependencies }}" == "yes" ]; then
VENV=$("$VENV_PYTHON" -c 'import sys;print(sys.prefix)')
echo ::group::Installing project dependencies...
"$VENV_PYTHON" -m pip download --dest="$VENV"/deps .
"$VENV_PYTHON" -m pip install -U --find-links="$VENV"/deps "$VENV"/deps/*
echo ::endgroup::
fi
echo ::group::Running mypy...
mypy_opts=""
if [ "${{ inputs.install_types }}" == "yes" ]; then
mypy_opts+="--install-types --non-interactive"
fi
echo "mypy $mypy_opts ${{ inputs.options }} ${{ inputs.paths }}"
"$VENV_PYTHON" -m mypy $mypy_opts ${{ inputs.options }} ${{ inputs.paths }}
echo ::endgroup::
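A minimal sketch of running the composite above against a project; the `@master` ref, the Python version, and the `src tests` layout are placeholders.
```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-python@v5
    with:
      python-version: '3.12'
  - uses: ammaraskar/mypy@master   # the ref is a placeholder
    with:
      paths: src tests                     # placeholder package layout
      options: --strict
      install_project_dependencies: 'no'   # skip installing the project into the mypy venv
```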
Action ID: marketplace/azure/azure-sdk-actions
Author: Unknown
Publisher: azure
Repository: github.com/azure/azure-sdk-actions
Handle GitHub event
| Name | Required | Description |
|---|---|---|
token |
Required | GitHub Event Token |
name: 'Azure SDK GitHub Event Handler'
description: 'Handle GitHub event'
inputs:
token:
description: "GitHub Event Token"
required: true
runs:
using: "composite"
steps:
- run: |
cd $GITHUB_ACTION_PATH
go run . ${{ github.event_path }}
shell: bash
env:
GITHUB_TOKEN: ${{ inputs.token }}
- name: Archive github event data
uses: actions/upload-artifact@v4
if: always()
with:
name: event
path: ${{ github.event_path }}
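A hedged sketch of invoking the handler; the trigger and the `@main` ref are illustrative assumptions, since the events this handler reacts to are repository-specific.
```yaml
on:
  issue_comment:
    types: [created]   # illustrative trigger
jobs:
  handle:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/azure-sdk-actions@main   # the ref is a placeholder
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
```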
Action ID: marketplace/techpivot/terraform-module-releaser
Author: TechPivot
Publisher: techpivot
Repository: github.com/techpivot/terraform-module-releaser
Automate versioning, releases, and documentation for Terraform modules in GitHub monorepos.
| Name | Required | Description |
|---|---|---|
major-keywords |
Required | Keywords in commit messages that indicate a major release. Default: major change,breaking change |
minor-keywords |
Required | Keywords in commit messages that indicate a minor release. Default: feat,feature |
patch-keywords |
Required | Keywords in commit messages that indicate a patch release. By default, everything will be a patch release if major or minor keywords are not found.
Default: fix,chore,docs |
default-first-tag |
Required | Specifies the default tag version. (Should be in format v#.#.#) Default: v1.0.0 |
terraform-docs-version |
Required | Specifies the terraform-docs version used to generate documentation for the wiki.
See: https://github.com/terraform-docs/terraform-docs/releases
Default: v0.20.0 |
delete-legacy-tags |
Required | Specifies a boolean that determines whether tags from Terraform modules that have been deleted should be automatically removed. By default this is true as the purpose of the repository is to keep releases/tags clean. When removing a module, this will ensure the tags/releases are automatically cleaned.
Default: true |
disable-wiki |
Required | Whether to disable wiki generation for Terraform modules. By default, this is set to false. Set to true to prevent wiki documentation from being generated.
Default: false |
wiki-sidebar-changelog-max |
Required | An integer that specifies how many changelog entries are displayed in the sidebar per module. Adjust this value to control the visibility of changelog entries in the module sidebar.
Default: 5 |
wiki-usage-template |
Optional | A raw, multi-line string to override the default 'Usage' section in the generated wiki. If not provided, a default usage block will be generated. Default: To use this module in your Terraform, refer to the below module example:
```hcl
module "{{module_name_terraform}}" {
source = "{{module_source}}?ref={{ref}}"{{ref_comment}}
# See inputs below for additional required parameters
}
```
|
disable-branding |
Required | Flag to control whether the small branding link should be disabled or not in the pull request (PR) comments. When branding is enabled, a link to the action's repository is added at the bottom of comments. Setting this flag to "true" will remove that link. Useful for cleaner PR comments in enterprise environments or where third-party branding is undesirable.
Default: false |
module-path-ignore |
Optional | A comma-separated list of module paths to completely ignore during processing. Paths matching these patterns will not be considered for versioning, releases, or documentation generation. These patterns follow glob syntax and are relative to the repository root. Use this to exclude example modules, test modules, or other paths that should not be treated as releasable modules. This is a top-level filter: if a module matches any of these patterns, it is completely excluded from all release, versioning, and documentation logic. No releases will ever happen for a module that matches this filter. The minimatch syntax is used for pattern matching. Patterns are relative to the workspace directory (no leading slash). NOTE: To match both a directory and its contents, use separate patterns (e.g., "dir,dir/**"). |
module-change-exclude-patterns |
Required | A comma-separated list of file patterns to exclude from triggering version changes in Terraform modules. These patterns follow glob syntax and are relative to each Terraform module directory, not the repository root. Files matching these patterns will be tracked in the module but changes to them won't trigger a new version. Uses matchBase: true for pattern matching, so patterns like "*.md" will match files in any subdirectory.
This option allows you to release a module but control which files inside a matched Terraform module should not force a bump of the module version. For example, you may want to exclude documentation or test files from triggering a release, but still include them in the module asset bundle.
WARNING: Avoid excluding '*.tf' files, as they are essential for module functionality and versioning.
Default: .gitignore,*.md,*.tftest.hcl,tests/** |
module-asset-exclude-patterns |
Required | A comma-separated list of file patterns to exclude when bundling a Terraform module for release assets. These patterns follow glob syntax and are relative to each Terraform module directory, not the repository root. Files matching these patterns will be excluded from the bundled release archives. Uses matchBase: true for pattern matching, similar to module-change-exclude-patterns.
These patterns only affect what's included in release assets and do not impact versioning decisions.
Default: .gitignore,*.md,*.tftest.hcl,tests/** |
use-ssh-source-format |
Required | If enabled, all links to source code in generated Wiki documentation will use SSH format instead of HTTPS format.
Default: false |
github_token |
Required | Required for retrieving pull request metadata, tags, releases, updating PR comments, wiki, and creating tags/releases. Automatically injected for convenience; no need to provide a custom token unless you have specific requirements.
Default: ${{ github.token }} |
tag-directory-separator |
Required | Character used to separate directory path components in Git tags. This separator is used to convert module directory paths into tag names (e.g., 'modules/aws/s3-bucket' becomes 'modules-aws-s3-bucket-v1.0.0' when using '-'). Must be a single character from: /, -, _, or .
Examples with different separators: - "/" (default): modules/aws/s3-bucket/v1.0.0 - "-": modules-aws-s3-bucket-v1.0.0 - "_": modules_aws_s3_bucket_v1.0.0 - ".": modules.aws.s3.bucket.v1.0.0
Default: / |
use-version-prefix |
Required | Whether to include the 'v' prefix on version tags (e.g., v1.2.3 vs 1.2.3). When enabled, all new version tags will include the 'v' prefix. For initial releases, this setting takes precedence over any 'v' prefix specified in the default-first-tag - if use-version-prefix is false and default-first-tag contains 'v', the 'v' will be automatically removed to ensure consistency.
Default: true |
module-ref-mode |
Required | Controls how Terraform module usage examples reference versions in generated documentation. Valid values: "tag" or "sha". When "tag" (default), examples use the tag name in the ref parameter (e.g., ?ref=aws/vpc-endpoint/v1.1.3). When "sha", examples use the commit SHA with the tag as a comment (e.g., ?ref=abc123def456 # aws/vpc-endpoint/v1.1.3). This allows pinning to immutable commit SHAs that cannot be deleted, useful with Renovate for handling module removal scenarios. Note: This only affects generated documentation; tag and release creation remains unchanged.
Default: tag |
| Name | Description |
|---|---|
changed-module-names |
JSON array of module names that were changed in the current pull request |
changed-module-paths |
JSON array of file system paths to the modules that were changed |
changed-modules-map |
JSON object mapping module names to their change details including current tag, next tag, and release type |
all-module-names |
JSON array of all module names found in the repository |
all-module-paths |
JSON array of file system paths to all modules in the repository |
all-modules-map |
JSON object mapping all module names to their details including path, latest tag, and latest tag version |
name: Terraform Module Releaser
description: Automate versioning, releases, and documentation for Terraform modules in GitHub monorepos.
author: TechPivot
branding:
icon: package
color: purple
inputs:
major-keywords:
description: Keywords in commit messages that indicate a major release.
required: true
default: major change,breaking change
minor-keywords:
description: Keywords in commit messages that indicate a minor release.
required: true
default: feat,feature
patch-keywords:
description: >
Keywords in commit messages that indicate a patch release. By default, everything will be a patch
release if major or minor keywords are not found.
required: true
default: fix,chore,docs
default-first-tag:
description: Specifies the default tag version. (Should be in format v#.#.#)
required: true
default: v1.0.0
terraform-docs-version:
description: >
Specifies the terraform-docs version used to generate documentation for the wiki.
See: https://github.com/terraform-docs/terraform-docs/releases
required: true
default: v0.20.0
delete-legacy-tags:
description: >
Specifies a boolean that determines whether tags from Terraform modules that have been deleted
should be automatically removed. By default this is true as the purpose of the repository is to keep
releases/tags clean. When removing a module, this will ensure the tags/releases are automatically
cleaned.
required: true
default: "true"
disable-wiki:
description: >
Whether to disable wiki generation for Terraform modules.
By default, this is set to false. Set to true to prevent wiki documentation from being generated.
required: true
default: "false"
wiki-sidebar-changelog-max:
description: >
An integer that specifies how many changelog entries are displayed in the sidebar per module.
Adjust this value to control the visibility of changelog entries in the module sidebar.
required: true
default: "5"
wiki-usage-template:
description: A raw, multi-line string to override the default 'Usage' section in the generated wiki. If not provided, a default usage block will be generated.
required: false
default: |
To use this module in your Terraform, refer to the below module example:
```hcl
module "{{module_name_terraform}}" {
source = "{{module_source}}?ref={{ref}}"{{ref_comment}}
# See inputs below for additional required parameters
}
```
disable-branding:
description: >
Flag to control whether the small branding link should be disabled or not in the
pull request (PR) comments. When branding is enabled, a link to the action's
repository is added at the bottom of comments. Setting this flag to "true"
will remove that link. Useful for cleaner PR comments in enterprise environments
or where third-party branding is undesirable.
required: true
default: "false"
module-path-ignore:
description: >
A comma-separated list of module paths to completely ignore during processing. Paths matching these patterns will
not be considered for versioning, releases, or documentation generation. These patterns follow glob syntax and are
relative to the repository root. Use this to exclude example modules, test modules, or other paths that should not
be treated as releasable modules.
This is a top-level filter: if a module matches any of these patterns, it is completely excluded from all release, versioning, and documentation logic. No releases will ever happen for a module that matches this filter.
The minimatch syntax is used for pattern matching. Patterns are relative to the workspace directory (no leading
slash).
NOTE: To match both a directory and its contents, use separate patterns (e.g., "dir,dir/**").
required: false
default: ""
module-change-exclude-patterns:
description: >
A comma-separated list of file patterns to exclude from triggering version changes in Terraform modules.
These patterns follow glob syntax and are relative to each Terraform module directory, not the repository root.
Files matching these patterns will be tracked in the module but changes to them won't trigger a new version.
Uses matchBase: true for pattern matching, so patterns like "*.md" will match files in any subdirectory.
This option allows you to release a module but control which files inside a matched Terraform module should not force a bump of the module version. For example, you may want to exclude documentation or test files from triggering a release, but still include them in the module asset bundle.
WARNING: Avoid excluding '*.tf' files, as they are essential for module functionality and versioning.
required: true
default: ".gitignore,*.md,*.tftest.hcl,tests/**"
module-asset-exclude-patterns:
description: >
A comma-separated list of file patterns to exclude when bundling a Terraform module for release assets.
These patterns follow glob syntax and are relative to each Terraform module directory, not the repository root.
Files matching these patterns will be excluded from the bundled release archives. Uses matchBase: true for
pattern matching, similar to module-change-exclude-patterns.
These patterns only affect what's included in release assets and do not impact versioning decisions.
required: true
default: ".gitignore,*.md,*.tftest.hcl,tests/**"
use-ssh-source-format:
description: >
If enabled, all links to source code in generated Wiki documentation will use SSH format instead of HTTPS format.
required: true
default: "false"
github_token:
description: >
Required for retrieving pull request metadata, tags, releases, updating PR comments, wiki, and creating
tags/releases. Automatically injected for convenience; no need to provide a custom token unless you have
specific requirements.
required: true
default: ${{ github.token }}
tag-directory-separator:
description: >
Character used to separate directory path components in Git tags. This separator is used to convert
module directory paths into tag names (e.g., 'modules/aws/s3-bucket' becomes 'modules-aws-s3-bucket-v1.0.0'
when using '-'). Must be a single character from: /, -, _, or .
Examples with different separators:
- "/" (default): modules/aws/s3-bucket/v1.0.0
- "-": modules-aws-s3-bucket-v1.0.0
- "_": modules_aws_s3_bucket_v1.0.0
- ".": modules.aws.s3.bucket.v1.0.0
required: true
default: /
use-version-prefix:
description: >
Whether to include the 'v' prefix on version tags (e.g., v1.2.3 vs 1.2.3). When enabled, all new version
tags will include the 'v' prefix. For initial releases, this setting takes precedence over any 'v' prefix
specified in the default-first-tag - if use-version-prefix is false and default-first-tag contains 'v',
the 'v' will be automatically removed to ensure consistency.
required: true
default: "true"
module-ref-mode:
description: >
Controls how Terraform module usage examples reference versions in generated documentation.
Valid values: "tag" or "sha". When "tag" (default), examples use the tag name in the ref parameter
(e.g., ?ref=aws/vpc-endpoint/v1.1.3). When "sha", examples use the commit SHA with the tag as a
comment (e.g., ?ref=abc123def456 # aws/vpc-endpoint/v1.1.3). This allows pinning to immutable
commit SHAs that cannot be deleted, useful with Renovate for handling module removal scenarios.
Note: This only affects generated documentation; tag and release creation remains unchanged.
required: true
default: "tag"
outputs:
changed-module-names:
description: JSON array of module names that were changed in the current pull request
changed-module-paths:
description: JSON array of file system paths to the modules that were changed
changed-modules-map:
description: JSON object mapping module names to their change details including current tag, next tag, and release type
all-module-names:
description: JSON array of all module names found in the repository
all-module-paths:
description: JSON array of file system paths to all modules in the repository
all-modules-map:
description: JSON object mapping all module names to their details including path, latest tag, and latest tag version
runs:
using: node20
main: dist/index.js
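A sketch of a monorepo release workflow around the action above. The `@v1` ref and the trigger/permission choices are assumptions; most inputs can be left at their defaults.
```yaml
on:
  pull_request:
    branches: [main]
    types: [opened, reopened, synchronize, closed]
permissions:
  contents: write        # assumed: create tags and releases
  pull-requests: write   # assumed: update PR comments
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: techpivot/terraform-module-releaser@v1   # the version ref is a placeholder
        with:
          default-first-tag: v1.0.0
          disable-wiki: "false"
```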
Action ID: marketplace/github/cleanowners
Author: github
Publisher: github
Repository: github.com/github/cleanowners
A GitHub Action to suggest removal of non-organization members from CODEOWNERS files.
---
name: "Cleanowners action"
author: "github"
description: "A GitHub Action to suggest removal of non-organization members from CODEOWNERS files."
runs:
using: "docker"
image: "docker://ghcr.io/github/cleanowners:v1"
branding:
icon: "bell"
color: "orange"
Action ID: marketplace/jidicula/test-go-action
Author: jidicula
Publisher: jidicula
Repository: github.com/jidicula/test-go-action
Simple action to send a tweet via a GitHub Action.
| Name | Required | Description |
|---|---|---|
message |
Required | message you want to tweet |
apiKey |
Required | api key for Twitter api |
apiKeySecret |
Required | api key secret for Twitter api |
accessToken |
Required | access token for Twitter api |
accessTokenSecret |
Required | access token secret for Twitter api |
| Name | Description |
|---|---|
errorMessage |
if something went wrong, the error message |
sentMessage |
message sent to Twitter |
name: Tweeter Action
author: jidicula
description: Simple action to send a tweet via a GitHub Action.
inputs:
message:
description: 'message you want to tweet'
required: true
apiKey:
description: 'api key for Twitter api'
required: true
apiKeySecret:
description: 'api key secret for Twitter api'
required: true
accessToken:
description: 'access token for Twitter api'
required: true
accessTokenSecret:
description: 'access token secret for Twitter api'
required: true
outputs:
errorMessage:
description: 'if something went wrong, the error message'
sentMessage:
description: 'message sent to Twitter'
runs:
using: docker
image: Dockerfile
# using: docker
# image: docker://ghcr.io/the-gophers/go-action:1.0.0
args:
- --message
- "${{ inputs.message }}"
- --apiKey
- ${{ inputs.apiKey }}
- --apiKeySecret
- ${{ inputs.apiKeySecret }}
- --accessToken
- ${{ inputs.accessToken }}
- --accessTokenSecret
- ${{ inputs.accessTokenSecret }}
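A usage sketch of the Docker action above; the `@v1` ref and the Twitter secret names are placeholders.
```yaml
steps:
  - id: tweet
    uses: jidicula/test-go-action@v1   # the version ref is a placeholder
    with:
      message: "New release is out!"
      apiKey: ${{ secrets.TWITTER_API_KEY }}                       # placeholder secret names
      apiKeySecret: ${{ secrets.TWITTER_API_KEY_SECRET }}
      accessToken: ${{ secrets.TWITTER_ACCESS_TOKEN }}
      accessTokenSecret: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
  - run: echo "Sent: ${{ steps.tweet.outputs.sentMessage }}"
```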
Action ID: marketplace/aws-actions/aws-lambda-deploy
Author: Unknown
Publisher: aws-actions
Repository: github.com/aws-actions/aws-lambda-deploy
Updates the code and configuration of AWS Lambda functions. If the function does not exist, a new one will be created.
| Name | Required | Description |
|---|---|---|
function-name |
Required | Name of the Lambda function. |
package-type |
Optional | The package type of the Lambda function. Set to "Image" for container image functions, defaults to "Zip" for .zip file functions. Default: Zip |
image-uri |
Optional | The URI of the container image in Amazon ECR. Required when package-type is "Image". |
code-artifacts-dir |
Optional | The path to a directory of code artifacts to zip and deploy to Lambda. Required when package-type is "Zip". |
handler |
Optional | The name of the method within your code that Lambda calls to run your function. Required when package-type is "Zip". Default: index.handler |
runtime |
Optional | The identifier of the runtime. Required when package-type is "Zip". Default: nodejs20.x |
s3-bucket |
Optional | S3 bucket name to use for Lambda deployment package. If provided, S3 deployment method will be used instead of direct upload. |
s3-key |
Optional | S3 key for the Lambda deployment package in the bucket. If not provided, a key will be auto-generated using the format: lambda-deployments/{function-name}/{timestamp}-{commit-hash}.zip. |
publish |
Optional | Set to true to publish a new version of the function after updating the code. |
dry-run |
Optional | Set to true to validate the request parameters and access permissions without modifying the function code. Applicable for updating function code only. Creating and updating function configuration will be skipped since they do not support dry run. Default: false |
revision-id |
Optional | Update the function only if the revision ID matches the ID that is specified. |
architectures |
Optional | The instruction set architecture that the function supports. |
source-kms-key-arn |
Optional | The ARN of the Key Management Service (KMS) customer managed key that is used to encrypt your function's .zip deployment package. |
role |
Optional | The Amazon Resource Name (ARN) of the function's execution role. Required when creating a new function. |
function-description |
Optional | A description of the function. |
memory-size |
Optional | The amount of memory available to the function at runtime. |
timeout |
Optional | The amount of time (in seconds) that Lambda allows a function to run before stopping it. |
vpc-config |
Optional | For network connectivity to Amazon Web Services resources in a VPC, specify a list of security groups and subnets in the VPC. |
environment |
Optional | Environment variables as a JSON string |
dead-letter-config |
Optional | Specifies the queue or topic where Lambda sends asynchronous events when they fail processing. |
kms-key-arn |
Optional | The ARN of the Key Management Service (KMS) customer managed key |
tracing-config |
Optional | Set Mode to Active to sample and trace a subset of incoming requests with X-Ray. |
layers |
Optional | A list of function layers to add to the function's execution environment. |
file-system-configs |
Optional | Connection settings for an Amazon EFS file system. |
image-config |
Optional | Configuration for the Lambda function's container image. |
ephemeral-storage |
Optional | The size of the function's /tmp directory in MB. The default value is 512, but can be any whole number between 512 and 10,240 MB. |
snap-start |
Optional | The function's SnapStart setting. |
logging-config |
Optional | The Amazon CloudWatch Logs configuration settings for the function. |
code-signing-config-arn |
Optional | The ARN of a code-signing configuration to use on this function. |
tags |
Optional | Tags to apply to the function as a JSON string (e.g. {"Environment":"Production","Team":"DevOps"}) |
| Name | Description |
|---|---|
function-arn |
The ARN of the updated Lambda function. |
version |
The function version if a new version was published. |
name: AWS Lambda Deploy Action
description: Updates the code and configuration of AWS Lambda functions. If the function does not exist, a new one will be created.
inputs:
function-name:
description: 'Name of the Lambda function.'
required: true
package-type:
description: 'The package type of the Lambda function. Set to "Image" for container image functions, defaults to "Zip" for .zip file functions.'
required: false
default: 'Zip'
image-uri:
description: 'The URI of the container image in Amazon ECR. Required when package-type is "Image".'
required: false
code-artifacts-dir:
description: 'The path to a directory of code artifacts to zip and deploy to Lambda. Required when package-type is "Zip".'
required: false
handler:
description: 'The name of the method within your code that Lambda calls to run your function. Required when package-type is "Zip".'
required: false
default: 'index.handler'
runtime:
description: 'The identifier of the runtime. Required when package-type is "Zip".'
required: false
default: 'nodejs20.x'
s3-bucket:
description: 'S3 bucket name to use for Lambda deployment package. If provided, S3 deployment method will be used instead of direct upload.'
required: false
s3-key:
description: 'S3 key for the Lambda deployment package in the bucket. If not provided, a key will be auto-generated using the format: lambda-deployments/{function-name}/{timestamp}-{commit-hash}.zip.'
required: false
publish:
description: 'Set to true to publish a new version of the function after updating the code.'
required: false
dry-run:
description: 'Set to true to validate the request parameters and access permissions without modifying the function code. Applicable for updating function code only. Creating and updating function configuration will be skipped since they do not support dry run.'
required: false
default: 'false'
revision-id:
description: 'Update the function only if the revision ID matches the ID that is specified.'
required: false
architectures:
description: 'The instruction set architecture that the function supports.'
required: false
source-kms-key-arn:
description: 'The ARN of the Key Management Service (KMS) customer managed key that is used to encrypt your function''s .zip deployment package.'
required: false
role:
description: 'The Amazon Resource Name (ARN) of the function''s execution role. Required when creating a new function.'
required: false
function-description:
description: 'A description of the function.'
required: false
memory-size:
description: 'The amount of memory available to the function at runtime.'
required: false
timeout:
description: 'The amount of time (in seconds) that Lambda allows a function to run before stopping it.'
required: false
vpc-config:
description: 'For network connectivity to Amazon Web Services resources in a VPC, specify a list of security groups and subnets in the VPC.'
required: false
environment:
description: 'Environment variables as a JSON string'
required: false
dead-letter-config:
description: 'Specifies the queue or topic where Lambda sends asynchronous events when they fail processing.'
required: false
kms-key-arn:
description: 'The ARN of the Key Management Service (KMS) customer managed key'
required: false
tracing-config:
description: 'Set Mode to Active to sample and trace a subset of incoming requests with X-Ray.'
required: false
layers:
description: 'A list of function layers to add to the function''s execution environment.'
required: false
file-system-configs:
description: 'Connection settings for an Amazon EFS file system.'
required: false
image-config:
description: 'Configuration for the Lambda function''s container image.'
required: false
ephemeral-storage:
description: 'The size of the function''s /tmp directory in MB. The default value is 512, but can be any whole number between 512 and 10,240 MB.'
required: false
snap-start:
description: 'The function''s SnapStart setting.'
required: false
logging-config:
description: 'The Amazon CloudWatch Logs configuration settings for the function.'
required: false
code-signing-config-arn:
description: 'The ARN of a code-signing configuration to use on this function.'
required: false
tags:
description: 'Tags to apply to the function as a JSON string (e.g. {"Environment":"Production","Team":"DevOps"})'
required: false
outputs:
function-arn:
description: 'The ARN of the updated Lambda function.'
version:
description: 'The function version if a new version was published.'
runs:
using: 'node20'
main: 'dist/index.js'
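A minimal workflow sketch that deploys a .zip package to an existing function. The version tags, AWS region, secret name, and the separate `aws-actions/configure-aws-credentials` authentication step are assumptions for illustration:

```yaml
name: Deploy Lambda
on:
  push:
    branches: [main]

permissions:
  id-token: write   # for OIDC authentication to AWS (assumed setup)
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE_ARN }}   # hypothetical secret
          aws-region: us-east-1
      - name: Deploy function code
        id: deploy
        uses: aws-actions/aws-lambda-deploy@v1   # version tag assumed
        with:
          function-name: my-function        # illustrative
          code-artifacts-dir: ./dist        # illustrative
          handler: index.handler
          runtime: nodejs20.x
          publish: 'true'
      - name: Show result
        run: echo "Deployed ${{ steps.deploy.outputs.function-arn }}"
```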
Action ID: marketplace/julia-actions/julia-processcoverage
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-processcoverage
Convert Julia coverage files into an lcov file
| Name | Required | Description |
|---|---|---|
directories |
Optional | Comma-separated list of directories to look for coverage information (e.g. `src,examples`) Default: src,ext |
name: 'Process Julia coverage files'
description: 'Convert Julia coverage files into an lcov file'
author: 'David Anthoff'
branding:
icon: 'settings'
color: 'gray-dark'
inputs:
directories:
description: 'Comma-separated list of directories to look for coverage information (e.g. `src,examples`)'
required: false
default: 'src,ext'
runs:
using: 'composite'
steps:
- run: julia --color=yes "$GITHUB_ACTION_PATH"/main.jl
shell: bash
env:
INPUT_DIRECTORIES: ${{ inputs.directories }}
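A minimal job sketch showing where this step typically sits in a Julia test workflow; the companion actions and version tags are assumptions for illustration:

```yaml
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2      # companion action, tag assumed
      - uses: julia-actions/julia-runtest@v1    # produces the .cov files, tag assumed
      - name: Process coverage
        uses: julia-actions/julia-processcoverage@v1   # tag assumed
        with:
          directories: src,ext
      # The resulting lcov file can then be uploaded to a coverage service.
```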
Action ID: marketplace/TryGhost/action-deploy-theme
Author: Unknown
Publisher: TryGhost
Repository: github.com/TryGhost/action-deploy-theme
Build & deploy a theme to your Ghost site
| Name | Required | Description |
|---|---|---|
api-url |
Required | Ghost Admin API Url |
api-key |
Required | Ghost Admin API Key |
exclude |
Optional | Files or folders to exclude (space-separated list) |
theme-name |
Optional | A custom theme name that overrides the default name in package.json |
file |
Optional | Path to a built zip file. If this is included, the `exclude` and `theme-name` options are ignored |
working-directory |
Optional | A custom directory to zip when a theme is in a subdirectory |
name: 'Deploy Ghost Theme'
description: 'Build & deploy a theme to your Ghost site'
branding:
icon: 'cloud-lightning'
color: 'gray-dark'
inputs:
api-url:
description: 'Ghost Admin API Url'
required: true
api-key:
description: 'Ghost Admin API Key'
required: true
exclude:
description: 'Files or folders to exclude (space-separated list)'
required: false
theme-name:
description: 'A custom theme name that overrides the default name in package.json'
required: false
file:
description: 'Path to a built zip file. If this is included, the `exclude` and `theme-name` options are ignored'
required: false
working-directory:
description: 'A custom directory to zip when a theme is in a subdirectory'
required: false
runs:
using: 'node20'
main: 'dist/index.js'
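A minimal workflow sketch; the version tag and secret names are assumptions for illustration:

```yaml
name: Deploy Ghost theme
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy theme
        uses: TryGhost/action-deploy-theme@v1   # version tag assumed
        with:
          api-url: ${{ secrets.GHOST_ADMIN_API_URL }}   # hypothetical secret names
          api-key: ${{ secrets.GHOST_ADMIN_API_KEY }}
          exclude: 'node_modules/* src/*'               # illustrative
```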
Action ID: marketplace/jidicula/functions-action
Author: Unknown
Publisher: jidicula
Repository: github.com/jidicula/functions-action
Deploy Function App to Azure Functions
| Name | Required | Description |
|---|---|---|
app-name |
Required | Name of the Azure Function App |
package |
Optional | Path to package or folder. *.zip or a folder to deploy Default: . |
slot-name |
Optional | Function app slot to deploy to |
publish-profile |
Optional | Publish profile (*.publishsettings) file contents with web deploy secrets |
respect-pom-xml |
Optional | Automatically look up Java function app artifact from pom.xml (default: 'false'). When this is set to 'true', 'package' should point to the folder of host.json. Default: false |
respect-funcignore |
Optional | Remove unwanted files defined in .funcignore file (default: 'false'). When this is set to 'true', 'package' should point to the folder of host.json. Default: false |
scm-do-build-during-deployment |
Optional | Enable build action from Kudu when the package is deployed onto the function app. This will temporarily change the SCM_DO_BUILD_DURING_DEPLOYMENT setting for this deployment. To bypass this and use the existing settings from your function app, please set this to an empty string ''. To enable remote build for your project, please set this and 'enable-oryx-build' both to 'true'. By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the redundant build action from Kudu endpoint. (default: 'false'). Default: false |
enable-oryx-build |
Optional | Use Oryx Build from Kudu when the package is deployed onto the function app. (Linux functions only). This will temporarily change the ENABLE_ORYX_BUILD setting from this deployment. To bypass this and use the existing settings from your function app, please set this to an empty string ''. To enable remote build for your project, please set this and 'scm-do-build-during-deployment' both to 'true'. By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the redundant build action from Kudu endpoint. (default: 'false'). Default: false |
sku |
Optional | For function app on Flex Consumption plan, set this to 'flexconsumption'. You can skip this parameter for function app on other plans. If using RBAC credentials, then by default, GitHub Action will resolve the value for this parameter. But if using 'publish-profile', then you must set this for function app on Flex Consumption plan. |
remote-build |
Optional | For function app on Flex Consumption plan, enable build action from Kudu when the package is deployed onto the function app by setting this to 'true'. For function app on Flex Consumption plan, do not set 'scm-do-build-during-deployment' and 'enable-oryx-build'. By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the redundant build action from Kudu endpoint. (default: 'false'). Default: false |
| Name | Description |
|---|---|
app-url |
URL to work with your function app |
package-url |
URL to the package zip file if using package deployment |
# Azure Functions GitHub Action
name: 'Azure Functions Action'
description: 'Deploy Function App to Azure Functions'
inputs:
app-name:
description: 'Name of the Azure Function App'
required: true
package:
description: 'Path to package or folder. *.zip or a folder to deploy'
required: false
default: '.'
slot-name:
description: 'Function app slot to deploy to'
required: false
publish-profile:
description: 'Publish profile (*.publishsettings) file contents with web deploy secrets'
required: false
respect-pom-xml:
description: "Automatically look up Java function app artifact from pom.xml (default: 'false').
When this is set to 'true', 'package' should point to the folder of host.json."
required: false
default: 'false'
respect-funcignore:
description: "Remove unwanted files defined in .funcignore file (default: 'false').
When this is set to 'true', 'package' should point to the folder of host.json."
required: false
default: 'false'
scm-do-build-during-deployment:
description: "Enable build action from Kudu when the package is deployed onto the function app.
This will temporarily change the SCM_DO_BUILD_DURING_DEPLOYMENT setting for this deployment.
To bypass this and use the existing settings from your function app, please set this to an empty
string ''.
To enable remote build for your project, please set this and 'enable-oryx-build' both to 'true'.
By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the
redundant build action from Kudu endpoint. (default: 'false')."
required: false
default: 'false'
enable-oryx-build:
description: "Use Oryx Build from Kudu when the package is deployed onto the function app. (Linux functions only).
This will temporarily change the ENABLE_ORYX_BUILD setting from this deployment.
To bypass this and use the existing settings from your function app, please set this to an empty
string ''.
To enable remote build for your project, please set this and 'scm-do-build-during-deployment' both
to 'true'.
By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the
redundant build action from Kudu endpoint. (default: 'false')."
required: false
default: 'false'
sku:
description: "For function app on Flex Consumption plan, set this to 'flexconsumption'. You can skip this parameter for function app on other plans.
If using RBAC credentials, then by default, GitHub Action will resolve the value for this parameter. But if using 'publish-profile',
then you must set this for function app on Flex Consumption plan."
required: false
remote-build:
description: "For function app on Flex Consumption plan, enable build action from Kudu when the package is deployed onto the function app by setting this to 'true'.
For function app on Flex Consumption plan, do not set 'scm-do-build-during-deployment' and 'enable-oryx-build'.
By default, GitHub Action respects the packages resolved in GitHub workflow, disabling the
redundant build action from Kudu endpoint. (default: 'false')."
required: false
default: 'false'
outputs:
app-url:
description: 'URL to work with your function app'
package-url:
description: 'URL to the package zip file if using package deployment'
branding:
icon: 'functionapp.svg'
color: 'blue'
runs:
using: 'node20'
main: 'lib/main.js'
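A minimal workflow sketch using publish-profile authentication; the version tag, app name, package path, and secret name are assumptions for illustration:

```yaml
name: Deploy function app
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to Azure Functions
        id: deploy
        uses: jidicula/functions-action@v1   # version tag assumed
        with:
          app-name: my-function-app          # illustrative
          package: './output'                # illustrative path
          publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}   # hypothetical secret
      - name: Show app URL
        run: echo "${{ steps.deploy.outputs.app-url }}"
```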
Action ID: marketplace/actions/delete-package-versions
Author: Unknown
Publisher: actions
Repository: github.com/actions/delete-package-versions
Deletes package versions
| Name | Required | Description |
|---|---|---|
package-version-ids |
Optional | Comma separated string of package version ids to delete. |
owner |
Optional | Owner of the repo containing the package version to delete. Defaults to the owner of the repo running the action. |
package-name |
Required | Name of the package containing the version to delete. |
package-type |
Required | Type of package. Can be one of container, maven, npm, nuget, or rubygems. |
num-old-versions-to-delete |
Optional | Number of versions to delete starting with the oldest version. Defaults to 1.
Default: 1 |
min-versions-to-keep |
Optional | Number of versions to keep, starting with the latest version. By default keeps no versions. To delete all versions, set this to 0.
Default: -1 |
ignore-versions |
Optional | Regex pattern for package versions to ignore. Defaults to delete all versions.
Default: ^$ |
delete-only-pre-release-versions |
Optional | Deletes only pre-release versions. The number of pre-release versions to keep can be specified by min-versions-to-keep. When this is set num-old-versions-to-delete and ignore-versions will not be taken into account. By default this is set to false
Default: false |
delete-only-untagged-versions |
Optional | Deletes only untagged versions in case of a container package. Does not work for other package types. The number of untagged versions to keep can be specified by min-versions-to-keep. When this is set num-old-versions-to-delete will not be taken into account. By default this is set to false
Default: false |
token |
Optional | Token with the necessary scopes to delete package versions. If num-old-versions-to-delete is used the token also needs the read packages scope. Defaults to github.token scoped to the repo running the action. To delete package versions of a package outside the repo the action is running in use a Personal Access Token stored as a secret.
Default: ${{ github.token }} |
name: Delete Package Versions
description: Deletes package versions
inputs:
package-version-ids:
description: Comma separated string of package version ids to delete.
required: false
owner:
description: >
Owner of the repo containing the package version to delete.
Defaults to the owner of the repo running the action.
required: false
package-name:
description: >
Name of the package containing the version to delete.
required: true
package-type:
description: >
Type of package. Can be one of container, maven, npm, nuget, or rubygems.
required: true
num-old-versions-to-delete:
description: >
Number of versions to delete starting with the oldest version.
Defaults to 1.
required: false
default: "1"
min-versions-to-keep:
description: >
Number of versions to keep, starting with the latest version.
By default keeps no versions.
To delete all versions, set this to 0.
required: false
default: "-1"
ignore-versions:
description: >
Regex pattern for package versions to ignore.
Defaults to delete all versions.
required: false
default: "^$"
delete-only-pre-release-versions:
description: >
Deletes only pre-release versions. The number of pre-release versions to keep can be specified by min-versions-to-keep.
When this is set num-old-versions-to-delete and ignore-versions will not be taken into account.
By default this is set to false
required: false
default: "false"
delete-only-untagged-versions:
description: >
Deletes only untagged versions in case of a container package. Does not work for other package types.
The number of untagged versions to keep can be specified by min-versions-to-keep.
When this is set num-old-versions-to-delete will not be taken into account.
By default this is set to false
required: false
default: "false"
token:
description: >
Token with the necessary scopes to delete package versions.
If num-old-versions-to-delete is used the token also needs the read packages scope.
Defaults to github.token scoped to the repo running the action. To delete package versions
of a package outside the repo the action is running in use a Personal Access Token stored as a secret.
required: false
default: ${{ github.token }}
runs:
using: node20
main: dist/index.js
branding:
icon: package
color: blue
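A minimal scheduled-cleanup sketch; the version tag, package name, and retention numbers are illustrative:

```yaml
name: Prune old container versions
on:
  schedule:
    - cron: '0 4 * * 1'   # hypothetical weekly schedule

jobs:
  prune:
    runs-on: ubuntu-latest
    permissions:
      packages: write   # typically needed for github.token to manage package versions (assumption)
    steps:
      - uses: actions/delete-package-versions@v5   # version tag assumed
        with:
          package-name: my-image      # illustrative
          package-type: container
          min-versions-to-keep: 10
          delete-only-untagged-versions: 'true'
```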
Action ID: marketplace/google-github-actions/run-vertexai-notebook
Author: Unknown
Publisher: google-github-actions
Repository: github.com/google-github-actions/run-vertexai-notebook
Execute notebooks and create links to their output files
| Name | Required | Description |
|---|---|---|
gcs_source_bucket |
Required | Google Cloud Storage bucket to store notebooks to be run by Vertex AI. e.g. <project-id>/nbr/source |
gcs_output_bucket |
Required | Google Cloud Storage bucket to store the results of the notebooks executed by Vertex AI. e.g. <project-id>/nbr/output |
allowlist |
Required | Comma separated list of files to run on Vertex AI. e.g. mynotebook.ipynb, somedir/**.pynb. It is expected that this is the output from an action like ```dorny/paths-filter``` |
vertex_machine_type |
Optional | Type of Vertex AI machine to run notebooks on e.g. n1-standard-4 Default: n1-standard-4 |
region |
Optional | Google Cloud region e.g. us-central1, us-east4 Default: us-central1 |
add_comment |
Optional | Add a comment to an open PR as the final step - defaults to "true" Default: true |
kernel_name |
Optional | Notebook kernel to use for the execution environment - defaults to python3 Default: python3 |
vertex_container_name |
Optional | The base container image to use. Defaults to the basic Python container. Default: gcr.io/deeplearning-platform-release/base-cu110:latest |
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Vertex AI Notebook Review Action'
description: 'Execute notebooks and create links to their output files'
inputs:
gcs_source_bucket:
description: 'Google Cloud Storage bucket to store notebooks to be run by Vertex AI. e.g. <project-id>/nbr/source'
required: true
gcs_output_bucket:
description: 'Google Cloud Storage bucket to store the results of the notebooks executed by Vertex AI. e.g. <project-id>/nbr/output'
required: true
allowlist:
description: 'Comma separated list of files to run on Vertex AI. e.g. mynotebook.ipynb, somedir/**.pynb. It is expected that this is the output from an action like ```dorny/paths-filter```'
required: true
vertex_machine_type:
description: 'Type of Vertex AI machine to run notebooks on e.g. n1-standard-4'
default: 'n1-standard-4'
required: false
region:
description: 'Google Cloud region e.g. us-central1, us-east4'
default: 'us-central1'
required: false
add_comment:
description: 'Add a comment to an open PR as the final step - defaults to "true"'
default: 'true'
required: false
kernel_name:
description: 'Notebook kernel to use for the execution environment - defaults to python3'
default: 'python3'
required: false
vertex_container_name:
description: 'The base container image to use. Defaults to the basic Python container.'
default: 'gcr.io/deeplearning-platform-release/base-cu110:latest'
required: false
runs:
using: 'composite'
steps:
# Move the files that are to be executed into a directory and rename them
- name: 'stage-files'
shell: 'bash'
env:
allowlist: '${{ inputs.allowlist }}'
dir: './${{ github.sha }}'
run: |-
set -x;
mkdir -p ${dir};
for file in ${allowlist};
do
f2=$(echo ${file}|tr '/' '_');
cp ${file} ${dir}/${f2};
done;
echo "notebooks=$(ls ${dir} | xargs)" >> $GITHUB_OUTPUT
# Setup gcloud CLI
- name: 'setup-cloud-sdk'
uses: 'google-github-actions/setup-gcloud@v2' # ratchet:exclude
- name: 'upload-folder'
uses: 'google-github-actions/upload-cloud-storage@v2' # ratchet:exclude
with:
path: './${{ github.sha }}'
destination: '${{ inputs.gcs_source_bucket }}'
gzip: false
headers: |-
content-type: application/octet-stream
- name: 'vertex-execution'
shell: 'bash'
env:
notebooks: '${{ inputs.allowlist }}'
commit_sha: '${{ github.sha }}'
output_location: 'gs://${{ inputs.gcs_output_bucket }}'
source_location: 'gs://${{ inputs.gcs_source_bucket }}'
machine_type: '${{ inputs.vertex_machine_type }}'
region: '${{ inputs.region }}'
kernel: '${{ inputs.kernel_name }}'
container: '${{ inputs.vertex_container_name }}'
run: |-
set -x;
echo '{"jobs": []}' > jobs.json
for file in ${notebooks};
do
file=$(echo ${file}|tr '/' '_');
job_name="${commit_sha}:${file}";
source_file="${source_location}/${commit_sha}/${file}";
output_file="${output_location}/${commit_sha}/${file}";
output=$(gcloud ai custom-jobs create \
--format=json \
--region=${region} \
--display-name="${job_name}" \
--labels=commit_sha=${commit_sha} \
--worker-pool-spec=machine-type="${machine_type}",replica-count="1",container-image-uri="${container}" \
--args=nbexecutor,--input-notebook="${source_file}",--output-notebook="${output_file}",--kernel-name="${kernel}");
echo $output | jq -c > training.json
jq '.jobs[.jobs | length] |= . + '$(cat training.json) jobs.json > jobs_new.json
cat jobs_new.json | jq -c > jobs.json
done;
echo "name=training_jobs=$(cat jobs.json)" >> $GITHUB_OUTPUT
- name: 'add-comment'
env:
vertex_job_uri: 'https://console.cloud.google.com/vertex-ai/locations'
vertex_notebook_uri: 'https://notebooks.cloud.google.com/view'
region: '${{ inputs.region }}'
add_comment: '${{ inputs.add_comment }}'
uses: 'actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea' # ratchet:actions/github-script@v7.0.1
with:
script: |
try {
if (process.env.add_comment.toLowerCase() !== 'true') {
return;
}
const fs = require('fs');
const region = process.env.region;
const notebookUri = process.env.vertex_notebook_uri;
const vertexUri = process.env.vertex_job_uri;
const project = process.env.GCLOUD_PROJECT;
const jsonStr = fs.readFileSync('jobs.json');
const data = JSON.parse(jsonStr);
const jid_re = /\/([0-9]+)$/;
for (const ix in data.jobs) {
const job = data.jobs[ix];
const nbName = job.displayName.split(":")[1];
const jobId = job.name.match(jid_re)[0].replace("/", "");
const outFile = job.jobSpec.workerPoolSpecs[0].containerSpec.args.filter((a) => a.startsWith("--output-notebook=gs://")).map((a) => a.replace("--output-notebook=gs://", ""))[0];
const jobUrl = encodeURI(`${vertexUri}/${region}/training/${jobId}?project=${project}`);
const nbUrl = encodeURI(`${notebookUri}/${outFile}`);
const message = `Automatic running of notebook **${nbName}** underway.
You can review the status of the job within Vertex AI: [Job ${jobId}](${jobUrl})
Once complete the notebook with output cells will be available to view [${nbName}](${nbUrl})`;
await github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: message,
});
}
} catch(err) {
core.setFailed(`Failed to generate add comment with Vertex AI links: ${err}`);
}
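A minimal PR-triggered sketch; the authentication step, version tags, bucket names, and the `dorny/paths-filter` configuration are assumptions for illustration (the allowlist input above expects output from an action of that kind):

```yaml
on: pull_request

jobs:
  notebook-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # for the PR comment step
      id-token: write        # for Workload Identity Federation (assumed auth setup)
    steps:
      - uses: actions/checkout@v4
      - name: Detect changed notebooks
        id: changes
        uses: dorny/paths-filter@v3        # tag assumed
        with:
          list-files: shell
          filters: |
            notebooks:
              - '**/*.ipynb'
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2   # tag assumed
        with:
          workload_identity_provider: ${{ secrets.GCP_WIF_PROVIDER }}   # hypothetical secrets
          service_account: ${{ secrets.GCP_SA_EMAIL }}
      - name: Run notebooks on Vertex AI
        uses: google-github-actions/run-vertexai-notebook@v0   # tag assumed
        with:
          gcs_source_bucket: my-project/nbr/source   # illustrative
          gcs_output_bucket: my-project/nbr/output   # illustrative
          allowlist: ${{ steps.changes.outputs.notebooks_files }}
```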
Action ID: marketplace/vn7n24fzkq/github-profile-summary-cards
Author: vn7n24fzkq
Publisher: vn7n24fzkq
Repository: github.com/vn7n24fzkq/github-profile-summary-cards
Generate profile summary cards and commit to default branch
| Name | Required | Description |
|---|---|---|
USERNAME |
Required | GitHub username Default: ${{ github.repository_owner }} |
BRANCH_NAME |
Optional | The branch to push cards Default: main |
UTC_OFFSET |
Optional | The UTC offset used in the Productive Time Card (e.g., 8, -3). |
EXCLUDE |
Optional | A comma separated list of languages to exclude |
AUTO_PUSH |
Optional | Whether to automatically push generated files to the desired branch Default: True |
name: GitHub-Profile-Summary-Cards
description: Generate profile summary cards and commit to default branch
author: vn7n24fzkq
inputs:
USERNAME:
type: string
required: true
description: 'GitHub username'
default: ${{ github.repository_owner }}
BRANCH_NAME:
type: string
required: false
description: 'The branch to push cards'
default: 'main'
UTC_OFFSET:
type: string
required: false
description: 'The UTC offset used in the Productive Time Card (e.g., 8, -3)'
default: 0
EXCLUDE:
type: string
required: false
description: 'A comma separated list of languages to exclude'
default: ''
AUTO_PUSH:
type: boolean
required: false
description: 'Whether to automatically push generated files to the desired branch'
default: true
runs:
using: 'node20'
main: 'dist/index.js'
branding:
icon: 'activity'
color: 'orange'
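A minimal scheduled sketch; the ref, schedule, and the `GITHUB_TOKEN` environment variable are assumptions for illustration:

```yaml
name: Profile summary cards
on:
  schedule:
    - cron: '0 0 * * *'   # hypothetical daily schedule
  workflow_dispatch:

jobs:
  cards:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # the action pushes generated cards back to the branch
    steps:
      - uses: actions/checkout@v4
      - uses: vn7n24fzkq/github-profile-summary-cards@release   # ref assumed; check the repo for the recommended tag
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # assumption: token used for API queries
        with:
          USERNAME: ${{ github.repository_owner }}
          BRANCH_NAME: main
          AUTO_PUSH: true
```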
Action ID: marketplace/lowlighter/metrics
Author: lowlighter
Publisher: lowlighter
Repository: github.com/lowlighter/metrics
An infographics generator with 40+ plugins and 300+ options to display stats about your GitHub account!
| Name | Required | Description |
|---|---|---|
base |
Optional | Base content Default: <default-value> |
base_indepth |
Optional | Indepth mode Default: <default-value> |
base_hireable |
Optional | Show `Available for hire!` in header section Default: <default-value> |
base_skip |
Optional | Skip base content Default: <default-value> |
repositories |
Optional | Fetched repositories Default: <default-value> |
repositories_batch |
Optional | Fetched repositories per query Default: <default-value> |
repositories_forks |
Optional | Include forks Default: <default-value> |
repositories_affiliations |
Optional | Repositories affiliations Default: <default-value> |
repositories_skipped |
Optional | Default skipped repositories Default: <default-value> |
users_ignored |
Optional | Default ignored users Default: <default-value> |
commits_authoring |
Optional | Identifiers that have been used for authoring commits Default: <default-value> |
token |
Required | GitHub Personal Access Token |
user |
Optional | GitHub username Default: <default-value> |
repo |
Optional | GitHub repository Default: <default-value> |
committer_token |
Optional | GitHub Token used to commit metrics Default: ${{ github.token }} |
committer_branch |
Optional | Target branch Default: <default-value> |
committer_message |
Optional | Commit message Default: <default-value> |
committer_gist |
Optional | Gist id Default: <default-value> |
filename |
Optional | Output path Default: <default-value> |
markdown |
Optional | Markdown template path Default: <default-value> |
markdown_cache |
Optional | Markdown file cache Default: <default-value> |
output_action |
Optional | Output action Default: <default-value> |
output_condition |
Optional | Output condition Default: <default-value> |
optimize |
Optional | Optimization features Default: <default-value> |
setup_community_templates |
Optional | Community templates to setup Default: <default-value> |
template |
Optional | Template Default: <default-value> |
query |
Optional | Query parameters Default: <default-value> |
extras_css |
Optional | Extra CSS Default: <default-value> |
extras_js |
Optional | Extra JavaScript Default: <default-value> |
github_api_rest |
Optional | GitHub REST API endpoint Default: <default-value> |
github_api_graphql |
Optional | GitHub GraphQL API endpoint Default: <default-value> |
config_timezone |
Optional | Timezone for dates |
config_order |
Optional | Plugin order Default: <default-value> |
config_twemoji |
Optional | Use twemojis Default: <default-value> |
config_gemoji |
Optional | Use GitHub custom emojis Default: <default-value> |
config_octicon |
Optional | Use GitHub octicons Default: <default-value> |
config_display |
Optional | Display width (for image output formats) Default: <default-value> |
config_animations |
Optional | Use CSS animations Default: <default-value> |
config_base64 |
Optional | Base64-encoded images Default: <default-value> |
config_padding |
Optional | Output padding Default: <default-value> |
config_output |
Optional | Output format Default: <default-value> |
config_presets |
Optional | Configuration presets |
retries |
Optional | Retries in case of failures (for rendering) Default: <default-value> |
retries_delay |
Optional | Delay between each retry (in seconds, for rendering) Default: <default-value> |
retries_output_action |
Optional | Retries in case of failures (for output action) Default: <default-value> |
retries_delay_output_action |
Optional | Delay between each retry (in seconds, for output action) Default: <default-value> |
clean_workflows |
Optional | Clean previous workflows jobs Default: <default-value> |
delay |
Optional | Job delay Default: <default-value> |
quota_required_rest |
Optional | Minimum GitHub REST API requests quota required to run Default: <default-value> |
quota_required_graphql |
Optional | Minimum GitHub GraphQL API requests quota required to run Default: <default-value> |
quota_required_search |
Optional | Minimum GitHub Search API requests quota required to run Default: <default-value> |
notice_releases |
Optional | Notice about new releases of metrics Default: <default-value> |
use_prebuilt_image |
Optional | Use pre-built docker image from [GitHub container registry](https://github.com/lowlighter/metrics/pkgs/container/metrics) Default: True |
plugins_errors_fatal |
Optional | Fatal plugin errors Default: <default-value> |
debug |
Optional | Debug mode Default: <default-value> |
verify |
Optional | SVG validity check Default: <default-value> |
debug_flags |
Optional | Debug flags Default: <default-value> |
debug_print |
Optional | Print output in console Default: <default-value> |
dryrun |
Optional | Dry-run Default: <default-value> |
experimental_features |
Optional | Experimental features Default: <default-value> |
use_mocked_data |
Optional | Use mocked data instead of live APIs Default: <default-value> |
plugin_isocalendar |
Optional | Enable isocalendar plugin Default: <default-value> |
plugin_isocalendar_duration |
Optional | Time range Default: <default-value> |
plugin_languages |
Optional | Enable languages plugin Default: <default-value> |
plugin_languages_ignored |
Optional | Ignored languages Default: <default-value> |
plugin_languages_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_languages_limit |
Optional | Display limit Default: <default-value> |
plugin_languages_threshold |
Optional | Display threshold (percentage) Default: <default-value> |
plugin_languages_other |
Optional | Group unknown, ignored and over-limit languages into "Other" category Default: <default-value> |
plugin_languages_colors |
Optional | Custom languages colors Default: <default-value> |
plugin_languages_aliases |
Optional | Custom languages names Default: <default-value> |
plugin_languages_sections |
Optional | Displayed sections Default: <default-value> |
plugin_languages_details |
Optional | Additional details Default: <default-value> |
plugin_languages_indepth |
Optional | Indepth mode Default: <default-value> |
plugin_languages_indepth_custom |
Optional | Indepth mode - Custom repositories Default: <default-value> |
plugin_languages_analysis_timeout |
Optional | Indepth mode - Analysis timeout Default: <default-value> |
plugin_languages_analysis_timeout_repositories |
Optional | Indepth mode - Analysis timeout (repositories) Default: <default-value> |
plugin_languages_categories |
Optional | Indepth mode - Displayed categories (most-used section) Default: <default-value> |
plugin_languages_recent_categories |
Optional | Indepth mode - Displayed categories (recently-used section) Default: <default-value> |
plugin_languages_recent_load |
Optional | Indepth mode - Events to load (recently-used section) Default: <default-value> |
plugin_languages_recent_days |
Optional | Indepth mode - Events maximum age (day, recently-used section) Default: <default-value> |
plugin_stargazers |
Optional | Enable stargazers plugin Default: <default-value> |
plugin_stargazers_days |
Optional | Time range Default: <default-value> |
plugin_stargazers_charts |
Optional | Charts Default: <default-value> |
plugin_stargazers_charts_type |
Optional | Charts display type Default: <default-value> |
plugin_stargazers_worldmap |
Optional | Stargazers worldmap Default: <default-value> |
plugin_stargazers_worldmap_token |
Optional | Stargazers worldmap token Default: <default-value> |
plugin_stargazers_worldmap_sample |
Optional | Stargazers worldmap sample Default: <default-value> |
plugin_lines |
Optional | Enable lines plugin Default: <default-value> |
plugin_lines_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_lines_sections |
Optional | Displayed sections Default: <default-value> |
plugin_lines_repositories_limit |
Optional | Display limit Default: <default-value> |
plugin_lines_history_limit |
Optional | Years to display Default: <default-value> |
plugin_lines_delay |
Optional | Delay before performing a second query Default: <default-value> |
plugin_topics |
Optional | Enable topics plugin Default: <default-value> |
plugin_topics_mode |
Optional | Display mode Default: <default-value> |
plugin_topics_sort |
Optional | Sorting method Default: <default-value> |
plugin_topics_limit |
Optional | Display limit Default: <default-value> |
plugin_stars |
Optional | Enable stars plugin Default: <default-value> |
plugin_stars_limit |
Optional | Display limit Default: <default-value> |
plugin_licenses |
Optional | Enable licenses plugin Default: <default-value> |
plugin_licenses_setup |
Optional | Setup command Default: <default-value> |
plugin_licenses_ratio |
Optional | Used licenses ratio Default: <default-value> |
plugin_licenses_legal |
Optional | Permissions, limitations and conditions about used licenses Default: <default-value> |
plugin_habits |
Optional | Enable habits plugin Default: <default-value> |
plugin_habits_from |
Optional | Events to use Default: <default-value> |
plugin_habits_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_habits_days |
Optional | Event maximum age Default: <default-value> |
plugin_habits_facts |
Optional | Mildly interesting facts Default: <default-value> |
plugin_habits_charts |
Optional | Charts Default: <default-value> |
plugin_habits_charts_type |
Optional | Charts display type Default: <default-value> |
plugin_habits_trim |
Optional | Trim unused hours on charts Default: <default-value> |
plugin_habits_languages_limit |
Optional | Display limit (languages) Default: <default-value> |
plugin_habits_languages_threshold |
Optional | Display threshold (percentage) Default: <default-value> |
plugin_contributors |
Optional | Enable contributors plugin Default: <default-value> |
plugin_contributors_base |
Optional | Base reference Default: <default-value> |
plugin_contributors_head |
Optional | Head reference Default: <default-value> |
plugin_contributors_ignored |
Optional | Ignored users Default: <default-value> |
plugin_contributors_contributions |
Optional | Contributions count Default: <default-value> |
plugin_contributors_sections |
Optional | Displayed sections Default: <default-value> |
plugin_contributors_categories |
Optional | Contribution categories Default: <default-value> |
plugin_followup |
Optional | Enable followup plugin Default: <default-value> |
plugin_followup_sections |
Optional | Displayed sections Default: <default-value> |
plugin_followup_indepth |
Optional | Indepth analysis Default: <default-value> |
plugin_followup_archived |
Optional | Include archived repositories Default: <default-value> |
plugin_reactions |
Optional | Enable reactions plugin Default: <default-value> |
plugin_reactions_limit |
Optional | Display limit (issues and pull requests comments) Default: <default-value> |
plugin_reactions_limit_issues |
Optional | Display limit (issues and pull requests, first comment) Default: <default-value> |
plugin_reactions_limit_discussions |
Optional | Display limit (discussions, first comment) Default: <default-value> |
plugin_reactions_limit_discussions_comments |
Optional | Display limit (discussions comments) Default: <default-value> |
plugin_reactions_days |
Optional | Comments maximum age Default: <default-value> |
plugin_reactions_display |
Optional | Display mode Default: <default-value> |
plugin_reactions_details |
Optional | Additional details Default: <default-value> |
plugin_reactions_ignored |
Optional | Ignored users Default: <default-value> |
plugin_people |
Optional | Enable people plugin Default: <default-value> |
plugin_people_limit |
Optional | Display limit Default: <default-value> |
plugin_people_identicons |
Optional | Force identicons pictures Default: <default-value> |
plugin_people_identicons_hide |
Optional | Hide identicons pictures Default: <default-value> |
plugin_people_size |
Optional | Profile picture display size Default: <default-value> |
plugin_people_types |
Optional | Displayed sections Default: <default-value> |
plugin_people_thanks |
Optional | Special thanks Default: <default-value> |
plugin_people_sponsors_custom |
Optional | Custom sponsors Default: <default-value> |
plugin_people_shuffle |
Optional | Shuffle data Default: <default-value> |
plugin_sponsorships |
Optional | Enable sponsorships plugin Default: <default-value> |
plugin_sponsorships_sections |
Optional | Displayed sections Default: <default-value> |
plugin_sponsorships_size |
Optional | Profile picture display size Default: <default-value> |
plugin_sponsors |
Optional | Enable sponsors plugin Default: <default-value> |
plugin_sponsors_sections |
Optional | Displayed sections Default: <default-value> |
plugin_sponsors_past |
Optional | Past sponsorships Default: <default-value> |
plugin_sponsors_size |
Optional | Profile picture display size Default: <default-value> |
plugin_sponsors_title |
Optional | Title caption Default: <default-value> |
plugin_repositories |
Optional | Enable repositories plugin Default: <default-value> |
plugin_repositories_featured |
Optional | Featured repositories Default: <default-value> |
plugin_repositories_pinned |
Optional | Pinned repositories Default: <default-value> |
plugin_repositories_starred |
Optional | Featured most starred repositories Default: <default-value> |
plugin_repositories_random |
Optional | Featured random repositories Default: <default-value> |
plugin_repositories_order |
Optional | Featured repositories display order Default: <default-value> |
plugin_repositories_forks |
Optional | Include repositories forks Default: <default-value> |
plugin_repositories_affiliations |
Optional | Repositories affiliations Default: <default-value> |
plugin_discussions |
Optional | Enable discussions plugin Default: <default-value> |
plugin_discussions_categories |
Optional | Discussion categories Default: <default-value> |
plugin_discussions_categories_limit |
Optional | Display limit (categories) Default: <default-value> |
plugin_starlists |
Optional | Enable starlists plugin Default: <default-value> |
plugin_starlists_limit |
Optional | Display limit (star lists) Default: <default-value> |
plugin_starlists_limit_repositories |
Optional | Display limit (repositories per star list) Default: <default-value> |
plugin_starlists_languages |
Optional | Star lists languages statistics Default: <default-value> |
plugin_starlists_limit_languages |
Optional | Display limit (languages per star list) Default: <default-value> |
plugin_starlists_languages_ignored |
Optional | Ignored languages in star lists Default: <default-value> |
plugin_starlists_languages_aliases |
Optional | Custom languages names in star lists Default: <default-value> |
plugin_starlists_shuffle_repositories |
Optional | Shuffle data Default: <default-value> |
plugin_starlists_ignored |
Optional | Skipped star lists Default: <default-value> |
plugin_starlists_only |
Optional | Showcased star lists Default: <default-value> |
plugin_calendar |
Optional | Enable calendar plugin Default: <default-value> |
plugin_calendar_limit |
Optional | Years to display Default: <default-value> |
plugin_achievements |
Optional | Enable achievements plugin Default: <default-value> |
plugin_achievements_threshold |
Optional | Rank threshold filter Default: <default-value> |
plugin_achievements_secrets |
Optional | Secrets achievements Default: <default-value> |
plugin_achievements_display |
Optional | Display style Default: <default-value> |
plugin_achievements_limit |
Optional | Display limit Default: <default-value> |
plugin_achievements_ignored |
Optional | Ignored achievements Default: <default-value> |
plugin_achievements_only |
Optional | Showcased achievements Default: <default-value> |
plugin_notable |
Optional | Enable notable plugin Default: <default-value> |
plugin_notable_filter |
Optional | Query filter Default: <default-value> |
plugin_notable_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_notable_from |
Optional | Repository owner account type filter Default: <default-value> |
plugin_notable_repositories |
Optional | Repository name Default: <default-value> |
plugin_notable_indepth |
Optional | Indepth mode Default: <default-value> |
plugin_notable_types |
Optional | Contribution types filter Default: <default-value> |
plugin_notable_self |
Optional | Include own repositories Default: <default-value> |
plugin_activity |
Optional | Enable activity plugin Default: <default-value> |
plugin_activity_limit |
Optional | Display limit Default: <default-value> |
plugin_activity_load |
Optional | Events to load Default: <default-value> |
plugin_activity_days |
Optional | Events maximum age Default: <default-value> |
plugin_activity_visibility |
Optional | Events visibility Default: <default-value> |
plugin_activity_timestamps |
Optional | Events timestamps Default: <default-value> |
plugin_activity_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_activity_ignored |
Optional | Ignored users Default: <default-value> |
plugin_activity_filter |
Optional | Events types Default: <default-value> |
plugin_traffic |
Optional | Enable traffic plugin Default: <default-value> |
plugin_traffic_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_code |
Optional | Enable code plugin Default: <default-value> |
plugin_code_lines |
Optional | Display limit (lines per code snippets) Default: <default-value> |
plugin_code_load |
Optional | Events to load Default: <default-value> |
plugin_code_days |
Optional | Events maximum age Default: <default-value> |
plugin_code_visibility |
Optional | Events visibility Default: <default-value> |
plugin_code_skipped |
Optional | Skipped repositories Default: <default-value> |
plugin_code_languages |
Optional | Showcased languages Default: <default-value> |
plugin_gists |
Optional | Enable gists plugin Default: <default-value> |
plugin_projects |
Optional | Enable projects plugin Default: <default-value> |
plugin_projects_limit |
Optional | Display limit Default: <default-value> |
plugin_projects_repositories |
Optional | Featured repositories projects Default: <default-value> |
plugin_projects_descriptions |
Optional | Projects descriptions Default: <default-value> |
plugin_introduction |
Optional | Enable introduction plugin Default: <default-value> |
plugin_introduction_title |
Optional | Section title Default: <default-value> |
plugin_skyline |
Optional | Enable skyline plugin Default: <default-value> |
plugin_skyline_year |
Optional | Displayed year Default: <default-value> |
plugin_skyline_frames |
Optional | Frames count Default: <default-value> |
plugin_skyline_quality |
Optional | Image quality Default: <default-value> |
plugin_skyline_compatibility |
Optional | Compatibility mode Default: <default-value> |
plugin_skyline_settings |
Optional | Advanced settings Default: <default-value> |
plugin_support |
Optional | Enable support plugin Default: <default-value> |
plugin_pagespeed |
Optional | Enable pagespeed plugin Default: <default-value> |
plugin_pagespeed_token |
Optional | PageSpeed token Default: <default-value> |
plugin_pagespeed_url |
Optional | Audited website Default: <default-value> |
plugin_pagespeed_detailed |
Optional | Detailed results Default: <default-value> |
plugin_pagespeed_screenshot |
Optional | Website screenshot Default: <default-value> |
plugin_pagespeed_pwa |
Optional | PWA Status Default: <default-value> |
plugin_tweets |
Optional | Enable tweets plugin Default: <default-value> |
plugin_tweets_token |
Optional | Twitter API token Default: <default-value> |
plugin_tweets_user |
Optional | Twitter username Default: <default-value> |
plugin_tweets_attachments |
Optional | Tweets attachments Default: <default-value> |
plugin_tweets_limit |
Optional | Display limit Default: <default-value> |
plugin_stackoverflow |
Optional | Enable stackoverflow plugin Default: <default-value> |
plugin_stackoverflow_user |
Optional | Stackoverflow user id Default: <default-value> |
plugin_stackoverflow_sections |
Optional | Displayed sections Default: <default-value> |
plugin_stackoverflow_limit |
Optional | Display limit (entries per section) Default: <default-value> |
plugin_stackoverflow_lines |
Optional | Display limit (lines per questions and answers) Default: <default-value> |
plugin_stackoverflow_lines_snippet |
Optional | Display limit (lines per code snippets) Default: <default-value> |
plugin_anilist |
Optional | Enable aniList plugin Default: <default-value> |
plugin_anilist_user |
Optional | AniList login Default: <default-value> |
plugin_anilist_medias |
Optional | Medias types Default: <default-value> |
plugin_anilist_sections |
Optional | Displayed sections Default: <default-value> |
plugin_anilist_limit |
Optional | Display limit (medias) Default: <default-value> |
plugin_anilist_limit_characters |
Optional | Display limit (characters) Default: <default-value> |
plugin_anilist_shuffle |
Optional | Shuffle data Default: <default-value> |
plugin_music |
Optional | Enable music plugin Default: <default-value> |
plugin_music_provider |
Optional | Music provider Default: <default-value> |
plugin_music_token |
Optional | Music provider token Default: <default-value> |
plugin_music_user |
Optional | Music provider username Default: <default-value> |
plugin_music_mode |
Optional | Display mode Default: <default-value> |
plugin_music_playlist |
Optional | Playlist URL Default: <default-value> |
plugin_music_limit |
Optional | Display limit Default: <default-value> |
plugin_music_played_at |
Optional | Recently played - Last played timestamp Default: <default-value> |
plugin_music_time_range |
Optional | Top tracks - Time range Default: <default-value> |
plugin_music_top_type |
Optional | Top tracks - Display type Default: <default-value> |
plugin_posts |
Optional | Enable posts plugin Default: <default-value> |
plugin_posts_source |
Optional | External source Default: <default-value> |
plugin_posts_user |
Optional | External source username Default: <default-value> |
plugin_posts_descriptions |
Optional | Posts descriptions Default: <default-value> |
plugin_posts_covers |
Optional | Posts cover images Default: <default-value> |
plugin_posts_limit |
Optional | Display limit Default: <default-value> |
plugin_rss |
Optional | Enable rss plugin Default: <default-value> |
plugin_rss_source |
Optional | RSS feed source Default: <default-value> |
plugin_rss_limit |
Optional | Display limit Default: <default-value> |
plugin_wakatime |
Optional | Enable wakatime plugin Default: <default-value> |
plugin_wakatime_token |
Optional | WakaTime API token Default: <default-value> |
plugin_wakatime_url |
Optional | WakaTime URL Default: <default-value> |
plugin_wakatime_user |
Optional | WakaTime username Default: <default-value> |
plugin_wakatime_sections |
Optional | Displayed sections Default: <default-value> |
plugin_wakatime_days |
Optional | Time range Default: <default-value> |
plugin_wakatime_limit |
Optional | Display limit (entries per graph) Default: <default-value> |
plugin_wakatime_languages_other |
Optional | Other languages Default: <default-value> |
plugin_wakatime_languages_ignored |
Optional | Ignored languages Default: <default-value> |
plugin_wakatime_repositories_visibility |
Optional | Repositories visibility Default: <default-value> |
plugin_leetcode |
Optional | Enable leetcode plugin Default: <default-value> |
plugin_leetcode_user |
Optional | LeetCode login Default: <default-value> |
plugin_leetcode_sections |
Optional | Displayed sections Default: <default-value> |
plugin_leetcode_limit_skills |
Optional | Display limit (skills) Default: <default-value> |
plugin_leetcode_ignored_skills |
Optional | Ignored skills Default: <default-value> |
plugin_leetcode_limit_recent |
Optional | Display limit (recent) Default: <default-value> |
plugin_steam |
Optional | Enable steam plugin Default: <default-value> |
plugin_steam_token |
Optional | Steam token Default: <default-value> |
plugin_steam_sections |
Optional | Displayed sections Default: <default-value> |
plugin_steam_user |
Optional | Steam user id |
plugin_steam_games_ignored |
Optional | Ignored games Default: <default-value> |
plugin_steam_games_limit |
Optional | Display limit (Most played games) Default: <default-value> |
plugin_steam_recent_games_limit |
Optional | Display limit (Recently played games) Default: <default-value> |
plugin_steam_achievements_limit |
Optional | Display limit (Games achievements) Default: <default-value> |
plugin_steam_playtime_threshold |
Optional | Display threshold (Game playtime in hours) Default: <default-value> |
plugin_16personalities |
Optional | Enable 16personalities plugin Default: <default-value> |
plugin_16personalities_url |
Optional | Profile URL Default: <default-value> |
plugin_16personalities_sections |
Optional | Displayed sections Default: <default-value> |
plugin_16personalities_scores |
Optional | Display traits scores Default: <default-value> |
plugin_chess |
Optional | Enable chess plugin Default: <default-value> |
plugin_chess_token |
Optional | Chess platform token Default: <default-value> |
plugin_chess_user |
Optional | Chess platform login Default: <default-value> |
plugin_chess_platform |
Optional | Chess platform Default: <default-value> |
plugin_chess_animation |
Optional | Animation settings Default: <default-value> |
plugin_crypto |
Optional | Enable crypto plugin Default: <default-value> |
plugin_crypto_id |
Optional | Crypto id Default: <default-value> |
plugin_crypto_vs_currency |
Optional | The target currency of market data (usd, eur, jpy, etc.) Default: <default-value> |
plugin_crypto_days |
Optional | Data up to number of days ago (eg. 1,14,30,max) Default: <default-value> |
plugin_crypto_precision |
Optional | The number of decimal places to use Default: <default-value> |
plugin_fortune |
Optional | Enable fortune plugin Default: <default-value> |
plugin_nightscout |
Optional | Enable nightscout plugin Default: <default-value> |
plugin_nightscout_url |
Optional | Nightscout URL Default: <default-value> |
plugin_nightscout_datapoints |
Optional | Number of datapoints shown in the graph Default: <default-value> |
plugin_nightscout_lowalert |
Optional | Threshold for low blood sugar Default: <default-value> |
plugin_nightscout_highalert |
Optional | Threshold for high blood sugar Default: <default-value> |
plugin_nightscout_urgentlowalert |
Optional | Threshold for urgently low blood sugar Default: <default-value> |
plugin_nightscout_urgenthighalert |
Optional | Threshold for urgently high blood sugar Default: <default-value> |
plugin_poopmap |
Optional | Enable poopmap plugin Default: <default-value> |
plugin_poopmap_token |
Optional | PoopMap API token Default: <default-value> |
plugin_poopmap_days |
Optional | Time range Default: <default-value> |
plugin_screenshot |
Optional | Enable screenshot plugin Default: <default-value> |
plugin_screenshot_title |
Optional | Title caption Default: <default-value> |
plugin_screenshot_url |
Optional | Website URL Default: <default-value> |
plugin_screenshot_selector |
Optional | CSS Selector Default: <default-value> |
plugin_screenshot_mode |
Optional | Output mode Default: <default-value> |
plugin_screenshot_viewport |
Optional | Viewport options Default: <default-value> |
plugin_screenshot_wait |
Optional | Wait time before taking screenshot (ms) Default: <default-value> |
plugin_screenshot_background |
Optional | Background Default: <default-value> |
plugin_splatoon |
Optional | Enable splatoon plugin Default: <default-value> |
plugin_splatoon_token |
Optional | Splatnet token Default: <default-value> |
plugin_splatoon_sections |
Optional | Displayed sections Default: <default-value> |
plugin_splatoon_versus_limit |
Optional | Display limit (Versus) Default: <default-value> |
plugin_splatoon_salmon_limit |
Optional | Display limit (Salmon run) Default: <default-value> |
plugin_splatoon_statink |
Optional | stat.ink integration Default: <default-value> |
plugin_splatoon_statink_token |
Optional | stat.ink token Default: <default-value> |
plugin_splatoon_source |
Optional | Source Default: <default-value> |
plugin_stock |
Optional | Enable stock plugin Default: <default-value> |
plugin_stock_token |
Optional | Yahoo Finance token Default: <default-value> |
plugin_stock_symbol |
Optional | Company stock symbol Default: <default-value> |
plugin_stock_duration |
Optional | Time range Default: <default-value> |
plugin_stock_interval |
Optional | Time interval between points Default: <default-value> |
# ====================================================================================
# Inputs and configuration
inputs:
# ====================================================================================
# 🗃️ Base content
base:
description: Base content
default: <default-value>
base_indepth:
description: Indepth mode
default: <default-value>
base_hireable:
description: Show `Available for hire!` in header section
default: <default-value>
base_skip:
description: Skip base content
default: <default-value>
repositories:
description: Fetched repositories
default: <default-value>
repositories_batch:
description: Fetched repositories per query
default: <default-value>
repositories_forks:
description: Include forks
default: <default-value>
repositories_affiliations:
description: Repositories affiliations
default: <default-value>
repositories_skipped:
description: Default skipped repositories
default: <default-value>
users_ignored:
description: Default ignored users
default: <default-value>
commits_authoring:
description: Identifiers that have been used for authoring commits
default: <default-value>
# ====================================================================================
# 🧱 Core
token:
description: GitHub Personal Access Token
required: true
user:
description: GitHub username
default: <default-value>
repo:
description: GitHub repository
default: <default-value>
committer_token:
description: GitHub Token used to commit metrics
default: ${{ github.token }}
committer_branch:
description: Target branch
default: <default-value>
committer_message:
description: Commit message
default: <default-value>
committer_gist:
description: Gist id
default: <default-value>
filename:
description: Output path
default: <default-value>
markdown:
description: Markdown template path
default: <default-value>
markdown_cache:
description: Markdown file cache
default: <default-value>
output_action:
description: Output action
default: <default-value>
output_condition:
description: Output condition
default: <default-value>
optimize:
description: Optimization features
default: <default-value>
setup_community_templates:
description: Community templates to setup
default: <default-value>
template:
description: Template
default: <default-value>
query:
description: Query parameters
default: <default-value>
extras_css:
description: Extra CSS
default: <default-value>
extras_js:
description: Extra JavaScript
default: <default-value>
github_api_rest:
description: GitHub REST API endpoint
default: <default-value>
github_api_graphql:
description: GitHub GraphQL API endpoint
default: <default-value>
config_timezone:
description: Timezone for dates
default: ""
config_order:
description: Plugin order
default: <default-value>
config_twemoji:
description: Use twemojis
default: <default-value>
config_gemoji:
description: Use GitHub custom emojis
default: <default-value>
config_octicon:
description: Use GitHub octicons
default: <default-value>
config_display:
description: Display width (for image output formats)
default: <default-value>
config_animations:
description: Use CSS animations
default: <default-value>
config_base64:
description: Base64-encoded images
default: <default-value>
config_padding:
description: Output padding
default: <default-value>
config_output:
description: Output format
default: <default-value>
config_presets:
description: Configuration presets
default: ""
retries:
description: Retries in case of failures (for rendering)
default: <default-value>
retries_delay:
description: Delay between each retry (in seconds, for rendering)
default: <default-value>
retries_output_action:
description: Retries in case of failures (for output action)
default: <default-value>
retries_delay_output_action:
description: Delay between each retry (in seconds, for output action)
default: <default-value>
clean_workflows:
description: Clean previous workflows jobs
default: <default-value>
delay:
description: Job delay
default: <default-value>
quota_required_rest:
description: Minimum GitHub REST API requests quota required to run
default: <default-value>
quota_required_graphql:
description: Minimum GitHub GraphQL API requests quota required to run
default: <default-value>
quota_required_search:
description: Minimum GitHub Search API requests quota required to run
default: <default-value>
notice_releases:
description: Notice about new releases of metrics
default: <default-value>
use_prebuilt_image:
description: >-
Use pre-built docker image from [GitHub container
registry](https://github.com/lowlighter/metrics/pkgs/container/metrics)
default: yes
plugins_errors_fatal:
description: Fatal plugin errors
default: <default-value>
debug:
description: Debug mode
default: <default-value>
verify:
description: SVG validity check
default: <default-value>
debug_flags:
description: Debug flags
default: <default-value>
debug_print:
description: Print output in console
default: <default-value>
dryrun:
description: Dry-run
default: <default-value>
experimental_features:
description: Experimental features
default: <default-value>
use_mocked_data:
description: Use mocked data instead of live APIs
default: <default-value>
# ====================================================================================
# 📅 Isometric commit calendar
plugin_isocalendar:
description: Enable isocalendar plugin
default: <default-value>
plugin_isocalendar_duration:
description: Time range
default: <default-value>
# ====================================================================================
# 🈷️ Languages activity
plugin_languages:
description: Enable languages plugin
default: <default-value>
plugin_languages_ignored:
description: Ignored languages
default: <default-value>
plugin_languages_skipped:
description: Skipped repositories
default: <default-value>
plugin_languages_limit:
description: Display limit
default: <default-value>
plugin_languages_threshold:
description: Display threshold (percentage)
default: <default-value>
plugin_languages_other:
description: Group unknown, ignored and over-limit languages into "Other" category
default: <default-value>
plugin_languages_colors:
description: Custom languages colors
default: <default-value>
plugin_languages_aliases:
description: Custom languages names
default: <default-value>
plugin_languages_sections:
description: Displayed sections
default: <default-value>
plugin_languages_details:
description: Additional details
default: <default-value>
plugin_languages_indepth:
description: Indepth mode
default: <default-value>
plugin_languages_indepth_custom:
description: Indepth mode - Custom repositories
default: <default-value>
plugin_languages_analysis_timeout:
description: Indepth mode - Analysis timeout
default: <default-value>
plugin_languages_analysis_timeout_repositories:
description: Indepth mode - Analysis timeout (repositories)
default: <default-value>
plugin_languages_categories:
description: Indepth mode - Displayed categories (most-used section)
default: <default-value>
plugin_languages_recent_categories:
description: Indepth mode - Displayed categories (recently-used section)
default: <default-value>
plugin_languages_recent_load:
description: Indepth mode - Events to load (recently-used section)
default: <default-value>
plugin_languages_recent_days:
description: Indepth mode - Events maximum age (day, recently-used section)
default: <default-value>
# ====================================================================================
# ✨ Stargazers
plugin_stargazers:
description: Enable stargazers plugin
default: <default-value>
plugin_stargazers_days:
description: Time range
default: <default-value>
plugin_stargazers_charts:
description: Charts
default: <default-value>
plugin_stargazers_charts_type:
description: Charts display type
default: <default-value>
plugin_stargazers_worldmap:
description: Stargazers worldmap
default: <default-value>
plugin_stargazers_worldmap_token:
description: Stargazers worldmap token
default: <default-value>
plugin_stargazers_worldmap_sample:
description: Stargazers worldmap sample
default: <default-value>
# ====================================================================================
# 👨‍💻 Lines of code changed
plugin_lines:
description: Enable lines plugin
default: <default-value>
plugin_lines_skipped:
description: Skipped repositories
default: <default-value>
plugin_lines_sections:
description: Displayed sections
default: <default-value>
plugin_lines_repositories_limit:
description: Display limit
default: <default-value>
plugin_lines_history_limit:
description: Years to display
default: <default-value>
plugin_lines_delay:
description: Delay before performing a second query
default: <default-value>
# ====================================================================================
# 📌 Starred topics
plugin_topics:
description: Enable topics plugin
default: <default-value>
plugin_topics_mode:
description: Display mode
default: <default-value>
plugin_topics_sort:
description: Sorting method
default: <default-value>
plugin_topics_limit:
description: Display limit
default: <default-value>
# ====================================================================================
# 🌟 Recently starred repositories
plugin_stars:
description: Enable stars plugin
default: <default-value>
plugin_stars_limit:
description: Display limit
default: <default-value>
# ====================================================================================
# 📜 Repository licenses
plugin_licenses:
description: Enable licenses plugin
default: <default-value>
plugin_licenses_setup:
description: Setup command
default: <default-value>
plugin_licenses_ratio:
description: Used licenses ratio
default: <default-value>
plugin_licenses_legal:
description: Permissions, limitations and conditions about used licenses
default: <default-value>
# ====================================================================================
# 💡 Coding habits and activity
plugin_habits:
description: Enable habits plugin
default: <default-value>
plugin_habits_from:
description: Events to use
default: <default-value>
plugin_habits_skipped:
description: Skipped repositories
default: <default-value>
plugin_habits_days:
description: Event maximum age
default: <default-value>
plugin_habits_facts:
description: Mildly interesting facts
default: <default-value>
plugin_habits_charts:
description: Charts
default: <default-value>
plugin_habits_charts_type:
description: Charts display type
default: <default-value>
plugin_habits_trim:
description: Trim unused hours on charts
default: <default-value>
plugin_habits_languages_limit:
description: Display limit (languages)
default: <default-value>
plugin_habits_languages_threshold:
description: Display threshold (percentage)
default: <default-value>
# ====================================================================================
# 🏅 Repository contributors
plugin_contributors:
description: Enable contributors plugin
default: <default-value>
plugin_contributors_base:
description: Base reference
default: <default-value>
plugin_contributors_head:
description: Head reference
default: <default-value>
plugin_contributors_ignored:
description: Ignored users
default: <default-value>
plugin_contributors_contributions:
description: Contributions count
default: <default-value>
plugin_contributors_sections:
description: Displayed sections
default: <default-value>
plugin_contributors_categories:
description: Contribution categories
default: <default-value>
# ====================================================================================
# 🎟️ Follow-up of issues and pull requests
plugin_followup:
description: Enable followup plugin
default: <default-value>
plugin_followup_sections:
description: Displayed sections
default: <default-value>
plugin_followup_indepth:
description: Indepth analysis
default: <default-value>
plugin_followup_archived:
description: Include archived repositories
default: <default-value>
# ====================================================================================
# 🎭 Comment reactions
plugin_reactions:
description: Enable reactions plugin
default: <default-value>
plugin_reactions_limit:
description: Display limit (issues and pull requests comments)
default: <default-value>
plugin_reactions_limit_issues:
description: Display limit (issues and pull requests, first comment)
default: <default-value>
plugin_reactions_limit_discussions:
description: Display limit (discussions, first comment)
default: <default-value>
plugin_reactions_limit_discussions_comments:
description: Display limit (discussions comments)
default: <default-value>
plugin_reactions_days:
description: Comments maximum age
default: <default-value>
plugin_reactions_display:
description: Display mode
default: <default-value>
plugin_reactions_details:
description: Additional details
default: <default-value>
plugin_reactions_ignored:
description: Ignored users
default: <default-value>
# ====================================================================================
# 🧑‍🤝‍🧑 People
plugin_people:
description: Enable people plugin
default: <default-value>
plugin_people_limit:
description: Display limit
default: <default-value>
plugin_people_identicons:
description: Force identicons pictures
default: <default-value>
plugin_people_identicons_hide:
description: Hide identicons pictures
default: <default-value>
plugin_people_size:
description: Profile picture display size
default: <default-value>
plugin_people_types:
description: Displayed sections
default: <default-value>
plugin_people_thanks:
description: Special thanks
default: <default-value>
plugin_people_sponsors_custom:
description: Custom sponsors
default: <default-value>
plugin_people_shuffle:
description: Shuffle data
default: <default-value>
# ====================================================================================
# 💝 GitHub Sponsorships
plugin_sponsorships:
description: Enable sponsorships plugin
default: <default-value>
plugin_sponsorships_sections:
description: Displayed sections
default: <default-value>
plugin_sponsorships_size:
description: Profile picture display size
default: <default-value>
# ====================================================================================
# 💕 GitHub Sponsors
plugin_sponsors:
description: Enable sponsors plugin
default: <default-value>
plugin_sponsors_sections:
description: Displayed sections
default: <default-value>
plugin_sponsors_past:
description: Past sponsorships
default: <default-value>
plugin_sponsors_size:
description: Profile picture display size
default: <default-value>
plugin_sponsors_title:
description: Title caption
default: <default-value>
# ====================================================================================
# 📓 Featured repositories
plugin_repositories:
description: Enable repositories plugin
default: <default-value>
plugin_repositories_featured:
description: Featured repositories
default: <default-value>
plugin_repositories_pinned:
description: Pinned repositories
default: <default-value>
plugin_repositories_starred:
description: Featured most starred repositories
default: <default-value>
plugin_repositories_random:
description: Featured random repositories
default: <default-value>
plugin_repositories_order:
description: Featured repositories display order
default: <default-value>
plugin_repositories_forks:
description: Include repositories forks
default: <default-value>
plugin_repositories_affiliations:
description: Repositories affiliations
default: <default-value>
# ====================================================================================
# 💬 Discussions
plugin_discussions:
description: Enable discussions plugin
default: <default-value>
plugin_discussions_categories:
description: Discussion categories
default: <default-value>
plugin_discussions_categories_limit:
description: Display limit (categories)
default: <default-value>
# ====================================================================================
# 💫 Star lists
plugin_starlists:
description: Enable starlists plugin
default: <default-value>
plugin_starlists_limit:
description: Display limit (star lists)
default: <default-value>
plugin_starlists_limit_repositories:
description: Display limit (repositories per star list)
default: <default-value>
plugin_starlists_languages:
description: Star lists languages statistics
default: <default-value>
plugin_starlists_limit_languages:
description: Display limit (languages per star list)
default: <default-value>
plugin_starlists_languages_ignored:
description: Ignored languages in star lists
default: <default-value>
plugin_starlists_languages_aliases:
description: Custom languages names in star lists
default: <default-value>
plugin_starlists_shuffle_repositories:
description: Shuffle data
default: <default-value>
plugin_starlists_ignored:
description: Skipped star lists
default: <default-value>
plugin_starlists_only:
description: Showcased star lists
default: <default-value>
# ====================================================================================
# 📆 Commit calendar
plugin_calendar:
description: Enable calendar plugin
default: <default-value>
plugin_calendar_limit:
description: Years to display
default: <default-value>
# ====================================================================================
# 🏆 Achievements
plugin_achievements:
description: Enable achievements plugin
default: <default-value>
plugin_achievements_threshold:
description: Rank threshold filter
default: <default-value>
plugin_achievements_secrets:
description: Secrets achievements
default: <default-value>
plugin_achievements_display:
description: Display style
default: <default-value>
plugin_achievements_limit:
description: Display limit
default: <default-value>
plugin_achievements_ignored:
description: Ignored achievements
default: <default-value>
plugin_achievements_only:
description: Showcased achievements
default: <default-value>
# ====================================================================================
# 🎩 Notable contributions
plugin_notable:
description: Enable notable plugin
default: <default-value>
plugin_notable_filter:
description: Query filter
default: <default-value>
plugin_notable_skipped:
description: Skipped repositories
default: <default-value>
plugin_notable_from:
description: Repository owner account type filter
default: <default-value>
plugin_notable_repositories:
description: Repository name
default: <default-value>
plugin_notable_indepth:
description: Indepth mode
default: <default-value>
plugin_notable_types:
description: Contribution types filter
default: <default-value>
plugin_notable_self:
description: Include own repositories
default: <default-value>
# ====================================================================================
# 📰 Recent activity
plugin_activity:
description: Enable activity plugin
default: <default-value>
plugin_activity_limit:
description: Display limit
default: <default-value>
plugin_activity_load:
description: Events to load
default: <default-value>
plugin_activity_days:
description: Events maximum age
default: <default-value>
plugin_activity_visibility:
description: Events visibility
default: <default-value>
plugin_activity_timestamps:
description: Events timestamps
default: <default-value>
plugin_activity_skipped:
description: Skipped repositories
default: <default-value>
plugin_activity_ignored:
description: Ignored users
default: <default-value>
plugin_activity_filter:
description: Events types
default: <default-value>
# ====================================================================================
# 🧮 Repositories traffic
plugin_traffic:
description: Enable traffic plugin
default: <default-value>
plugin_traffic_skipped:
description: Skipped repositories
default: <default-value>
# ====================================================================================
# ♐ Random code snippet
plugin_code:
description: Enable code plugin
default: <default-value>
plugin_code_lines:
description: Display limit (lines per code snippets)
default: <default-value>
plugin_code_load:
description: Events to load
default: <default-value>
plugin_code_days:
description: Events maximum age
default: <default-value>
plugin_code_visibility:
description: Events visibility
default: <default-value>
plugin_code_skipped:
description: Skipped repositories
default: <default-value>
plugin_code_languages:
description: Showcased languages
default: <default-value>
# ====================================================================================
# 🎫 Gists
plugin_gists:
description: Enable gists plugin
default: <default-value>
# ====================================================================================
# 🗂️ GitHub projects
plugin_projects:
description: Enable projects plugin
default: <default-value>
plugin_projects_limit:
description: Display limit
default: <default-value>
plugin_projects_repositories:
description: Featured repositories projects
default: <default-value>
plugin_projects_descriptions:
description: Projects descriptions
default: <default-value>
# ====================================================================================
# 🙋 Introduction
plugin_introduction:
description: Enable introduction plugin
default: <default-value>
plugin_introduction_title:
description: Section title
default: <default-value>
# ====================================================================================
# 🌇 GitHub Skyline
plugin_skyline:
description: Enable skyline plugin
default: <default-value>
plugin_skyline_year:
description: Displayed year
default: <default-value>
plugin_skyline_frames:
description: Frames count
default: <default-value>
plugin_skyline_quality:
description: Image quality
default: <default-value>
plugin_skyline_compatibility:
description: Compatibility mode
default: <default-value>
plugin_skyline_settings:
description: Advanced settings
default: <default-value>
# ====================================================================================
# 💭 GitHub Community Support
plugin_support:
description: Enable support plugin
default: <default-value>
# ====================================================================================
# ⏱️ Google PageSpeed
plugin_pagespeed:
description: Enable pagespeed plugin
default: <default-value>
plugin_pagespeed_token:
description: PageSpeed token
default: <default-value>
plugin_pagespeed_url:
description: Audited website
default: <default-value>
plugin_pagespeed_detailed:
description: Detailed results
default: <default-value>
plugin_pagespeed_screenshot:
description: Website screenshot
default: <default-value>
plugin_pagespeed_pwa:
description: PWA Status
default: <default-value>
# ====================================================================================
# 🐤 Latest tweets
plugin_tweets:
description: Enable tweets plugin
default: <default-value>
plugin_tweets_token:
description: Twitter API token
default: <default-value>
plugin_tweets_user:
description: Twitter username
default: <default-value>
plugin_tweets_attachments:
description: Tweets attachments
default: <default-value>
plugin_tweets_limit:
description: Display limit
default: <default-value>
# ====================================================================================
# 🗨️ Stack Overflow
plugin_stackoverflow:
description: Enable stackoverflow plugin
default: <default-value>
plugin_stackoverflow_user:
description: Stackoverflow user id
default: <default-value>
plugin_stackoverflow_sections:
description: Displayed sections
default: <default-value>
plugin_stackoverflow_limit:
description: Display limit (entries per section)
default: <default-value>
plugin_stackoverflow_lines:
description: Display limit (lines per questions and answers)
default: <default-value>
plugin_stackoverflow_lines_snippet:
description: Display limit (lines per code snippets)
default: <default-value>
# ====================================================================================
# 🌸 Anilist watch list and reading list
plugin_anilist:
description: Enable aniList plugin
default: <default-value>
plugin_anilist_user:
description: AniList login
default: <default-value>
plugin_anilist_medias:
description: Medias types
default: <default-value>
plugin_anilist_sections:
description: Displayed sections
default: <default-value>
plugin_anilist_limit:
description: Display limit (medias)
default: <default-value>
plugin_anilist_limit_characters:
description: Display limit (characters)
default: <default-value>
plugin_anilist_shuffle:
description: Shuffle data
default: <default-value>
# ====================================================================================
# 🎼 Music activity and suggestions
plugin_music:
description: Enable music plugin
default: <default-value>
plugin_music_provider:
description: Music provider
default: <default-value>
plugin_music_token:
description: Music provider token
default: <default-value>
plugin_music_user:
description: Music provider username
default: <default-value>
plugin_music_mode:
description: Display mode
default: <default-value>
plugin_music_playlist:
description: Playlist URL
default: <default-value>
plugin_music_limit:
description: Display limit
default: <default-value>
plugin_music_played_at:
description: Recently played - Last played timestamp
default: <default-value>
plugin_music_time_range:
description: Top tracks - Time range
default: <default-value>
plugin_music_top_type:
description: Top tracks - Display type
default: <default-value>
# ====================================================================================
# ✒️ Recent posts
plugin_posts:
description: Enable posts plugin
default: <default-value>
plugin_posts_source:
description: External source
default: <default-value>
plugin_posts_user:
description: External source username
default: <default-value>
plugin_posts_descriptions:
description: Posts descriptions
default: <default-value>
plugin_posts_covers:
description: Posts cover images
default: <default-value>
plugin_posts_limit:
description: Display limit
default: <default-value>
# ====================================================================================
# 🗼 Rss feed
plugin_rss:
description: Enable rss plugin
default: <default-value>
plugin_rss_source:
description: RSS feed source
default: <default-value>
plugin_rss_limit:
description: Display limit
default: <default-value>
# ====================================================================================
# ⏰ WakaTime
plugin_wakatime:
description: Enable wakatime plugin
default: <default-value>
plugin_wakatime_token:
description: WakaTime API token
default: <default-value>
plugin_wakatime_url:
description: WakaTime URL
default: <default-value>
plugin_wakatime_user:
description: WakaTime username
default: <default-value>
plugin_wakatime_sections:
description: Displayed sections
default: <default-value>
plugin_wakatime_days:
description: Time range
default: <default-value>
plugin_wakatime_limit:
description: Display limit (entries per graph)
default: <default-value>
plugin_wakatime_languages_other:
description: Other languages
default: <default-value>
plugin_wakatime_languages_ignored:
description: Ignored languages
default: <default-value>
plugin_wakatime_repositories_visibility:
description: Repositories visibility
default: <default-value>
# ====================================================================================
# 🗳️ Leetcode
plugin_leetcode:
description: Enable leetcode plugin
default: <default-value>
plugin_leetcode_user:
description: LeetCode login
default: <default-value>
plugin_leetcode_sections:
description: Displayed sections
default: <default-value>
plugin_leetcode_limit_skills:
description: Display limit (skills)
default: <default-value>
plugin_leetcode_ignored_skills:
description: Ignored skills
default: <default-value>
plugin_leetcode_limit_recent:
description: Display limit (recent)
default: <default-value>
# ====================================================================================
# 🕹️ Steam
plugin_steam:
description: Enable steam plugin
default: <default-value>
plugin_steam_token:
description: Steam token
default: <default-value>
plugin_steam_sections:
description: Displayed sections
default: <default-value>
plugin_steam_user:
description: Steam user id
plugin_steam_games_ignored:
description: Ignored games
default: <default-value>
plugin_steam_games_limit:
description: Display limit (Most played games)
default: <default-value>
plugin_steam_recent_games_limit:
description: Display limit (Recently played games)
default: <default-value>
plugin_steam_achievements_limit:
description: Display limit (Games achievements)
default: <default-value>
plugin_steam_playtime_threshold:
description: Display threshold (Game playtime in hours)
default: <default-value>
# ====================================================================================
# 🧠 16personalities
plugin_16personalities:
description: Enable 16personalities plugin
default: <default-value>
plugin_16personalities_url:
description: Profile URL
default: <default-value>
plugin_16personalities_sections:
description: Displayed sections
default: <default-value>
plugin_16personalities_scores:
description: Display traits scores
default: <default-value>
# ====================================================================================
# ♟️ Chess
plugin_chess:
description: Enable chess plugin
default: <default-value>
plugin_chess_token:
description: Chess platform token
default: <default-value>
plugin_chess_user:
description: Chess platform login
default: <default-value>
plugin_chess_platform:
description: Chess platform
default: <default-value>
plugin_chess_animation:
description: Animation settings
default: <default-value>
# ====================================================================================
# 🪙 Crypto
plugin_crypto:
description: Enable crypto plugin
default: <default-value>
plugin_crypto_id:
description: Crypto id
default: <default-value>
plugin_crypto_vs_currency:
description: The target currency of market data (usd, eur, jpy, etc.)
default: <default-value>
plugin_crypto_days:
description: Data up to number of days ago (eg. 1,14,30,max)
default: <default-value>
plugin_crypto_precision:
description: The number of decimal places to use
default: <default-value>
# ====================================================================================
# 🥠 Fortune
plugin_fortune:
description: Enable fortune plugin
default: <default-value>
# ====================================================================================
# 💉 Nightscout
plugin_nightscout:
description: Enable nightscout plugin
default: <default-value>
plugin_nightscout_url:
description: Nightscout URL
default: <default-value>
plugin_nightscout_datapoints:
description: Number of datapoints shown in the graph
default: <default-value>
plugin_nightscout_lowalert:
description: Threshold for low blood sugar
default: <default-value>
plugin_nightscout_highalert:
description: Threshold for high blood sugar
default: <default-value>
plugin_nightscout_urgentlowalert:
description: Threshold for urgently low blood sugar
default: <default-value>
plugin_nightscout_urgenthighalert:
description: Threshold for urgently high blood sugar
default: <default-value>
# ====================================================================================
# 💩 PoopMap plugin
plugin_poopmap:
description: Enable poopmap plugin
default: <default-value>
plugin_poopmap_token:
description: PoopMap API token
default: <default-value>
plugin_poopmap_days:
description: Time range
default: <default-value>
# ====================================================================================
# 📸 Website screenshot
plugin_screenshot:
description: Enable screenshot plugin
default: <default-value>
plugin_screenshot_title:
description: Title caption
default: <default-value>
plugin_screenshot_url:
description: Website URL
default: <default-value>
plugin_screenshot_selector:
description: CSS Selector
default: <default-value>
plugin_screenshot_mode:
description: Output mode
default: <default-value>
plugin_screenshot_viewport:
description: Viewport options
default: <default-value>
plugin_screenshot_wait:
description: Wait time before taking screenshot (ms)
default: <default-value>
plugin_screenshot_background:
description: Background
default: <default-value>
# ====================================================================================
# 🦑 Splatoon
plugin_splatoon:
description: Enable splatoon plugin
default: <default-value>
plugin_splatoon_token:
description: Splatnet token
default: <default-value>
plugin_splatoon_sections:
description: Displayed sections
default: <default-value>
plugin_splatoon_versus_limit:
description: Display limit (Versus)
default: <default-value>
plugin_splatoon_salmon_limit:
description: Display limit (Salmon run)
default: <default-value>
plugin_splatoon_statink:
description: stat.ink integration
default: <default-value>
plugin_splatoon_statink_token:
description: stat.ink token
default: <default-value>
plugin_splatoon_source:
description: Source
default: <default-value>
# ====================================================================================
# 💹 Stock prices
plugin_stock:
description: Enable stock plugin
default: <default-value>
plugin_stock_token:
description: Yahoo Finance token
default: <default-value>
plugin_stock_symbol:
description: Company stock symbol
default: <default-value>
plugin_stock_duration:
description: Time range
default: <default-value>
plugin_stock_interval:
description: Time interval between points
default: <default-value>
# ====================================================================================
# Action metadata
name: Metrics embed
author: lowlighter
description: An infographics generator with 40+ plugins and 300+ options to display stats about your GitHub account!
branding:
icon: user-check
color: gray-dark
# The action will parse its name to check if it's the official action or if it's a forked one
# On the official action, it'll use the docker image published on GitHub registry when using a released version, allowing faster runs
# On a forked action, it'll rebuild the docker image from Dockerfile to take into account changes you made
runs:
using: composite
steps:
- run: |
# Check runner compatibility
echo "::group::Metrics docker image setup"
echo "GitHub action: $METRICS_ACTION ($METRICS_ACTION_PATH)"
cd $METRICS_ACTION_PATH
for DEPENDENCY in docker jq; do
if ! which $DEPENDENCY > /dev/null 2>&1; then
echo "::error::\"$DEPENDENCY\" is not installed on current runner but is needed to run metrics"
MISSING_DEPENDENCIES=1
fi
done
if [[ $MISSING_DEPENDENCIES == "1" ]]; then
echo "Runner compatibility: missing dependencies"
exit 1
else
echo "Runner compatibility: compatible"
fi
# Create environment file from inputs and GitHub variables
touch .env
for INPUT in $(echo $INPUTS | jq -r 'to_entries|map("INPUT_\(.key|ascii_upcase)=\(.value|@uri)")|.[]'); do
echo $INPUT >> .env
done
env | grep -E '^(GITHUB|ACTIONS|CI|TZ)' >> .env
echo "Environment variables: loaded"
# Renders output folder
METRICS_RENDERS="/metrics_renders"
sudo mkdir -p $METRICS_RENDERS
echo "Renders output folder: $METRICS_RENDERS"
# Source repository (picked from action name)
METRICS_SOURCE=$(echo $METRICS_ACTION | sed -E 's/metrics.*?$//g' | sed -E 's/_//g')
echo "Source: $METRICS_SOURCE"
# Version (picked from package.json)
METRICS_VERSION=$(grep -Po '(?<="version": ").*(?=")' package.json)
echo "Version: $METRICS_VERSION"
# Image tag (extracted from version or from env)
METRICS_TAG=v$(echo $METRICS_VERSION | sed -r 's/^([0-9]+[.][0-9]+).*/\1/')
echo "Image tag: $METRICS_TAG"
# Image name
# Official action
if [[ $METRICS_SOURCE == "lowlighter" ]]; then
# Use registry with pre-built images
if [[ ! $METRICS_USE_PREBUILT_IMAGE =~ ^([Ff]alse|[Oo]ff|[Nn]o|0)$ ]]; then
# Is released version
set +e
METRICS_IS_RELEASED=$(expr $(expr match $METRICS_VERSION .*-beta) == 0)
set -e
echo "Is released version: $METRICS_IS_RELEASED"
if [[ "$METRICS_IS_RELEASED" -eq "0" ]]; then
METRICS_TAG="$METRICS_TAG-beta"
echo "Image tag (updated): $METRICS_TAG"
fi
METRICS_IMAGE=ghcr.io/lowlighter/metrics:$METRICS_TAG
echo "Using pre-built version $METRICS_TAG, will pull docker image from GitHub registry"
if ! docker image pull $METRICS_IMAGE; then
echo "Failed to fetch docker image from GitHub registry, will rebuild it locally"
METRICS_IMAGE=metrics:$METRICS_VERSION
fi
# Rebuild image
else
echo "Using an unreleased version ($METRICS_VERSION)"
METRICS_IMAGE=metrics:$METRICS_VERSION
fi
# Forked action
else
echo "Using a forked version"
METRICS_IMAGE=metrics:forked-$METRICS_VERSION
fi
echo "Image name: $METRICS_IMAGE"
# Build image if necessary
set +e
docker image inspect $METRICS_IMAGE
METRICS_IMAGE_NEEDS_BUILD="$?"
set -e
if [[ "$METRICS_IMAGE_NEEDS_BUILD" -gt "0" ]]; then
echo "Image $METRICS_IMAGE is not present locally, rebuilding it from Dockerfile"
docker build -t $METRICS_IMAGE .
else
echo "Image $METRICS_IMAGE is present locally"
fi
echo "::endgroup::"
# Run docker image with current environment
docker run --init --rm --volume $GITHUB_EVENT_PATH:$GITHUB_EVENT_PATH --volume $METRICS_RENDERS:/renders --env-file .env $METRICS_IMAGE
rm .env
shell: bash
env:
METRICS_ACTION: ${{ github.action }}
METRICS_ACTION_PATH: ${{ github.action_path }}
METRICS_USE_PREBUILT_IMAGE: ${{ inputs.use_prebuilt_image }}
INPUTS: ${{ toJson(inputs) }}
TZ: ${{ inputs.config_timezone }}
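A minimal workflow sketch showing how a few of the documented inputs could be wired together; the schedule, secret name, username, filename and plugin selections are illustrative assumptions, not values taken from this entry:

```yaml
name: Metrics
on:
  schedule:
    - cron: "0 0 * * *"   # illustrative daily schedule
  workflow_dispatch:
jobs:
  github-metrics:
    runs-on: ubuntu-latest
    steps:
      - uses: lowlighter/metrics@latest
        with:
          token: ${{ secrets.METRICS_TOKEN }}   # GitHub Personal Access Token (the only required input)
          user: octocat                         # GitHub username (illustrative)
          filename: github-metrics.svg          # output path (illustrative)
          plugin_isocalendar: yes               # enable isocalendar plugin
          plugin_languages: yes                 # enable languages plugin
```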
Action ID: marketplace/mheap/markdown-meta-action
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/markdown-meta-action
Read front-matter from Markdown files
| Name | Required | Description |
|---|---|---|
| file | Required | The file to read |
name: Markdown Meta
description: Read front-matter from Markdown files
runs:
using: docker
image: Dockerfile
branding:
icon: edit-2
color: orange
inputs:
file:
description: The file to read
required: true
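A minimal usage sketch for this action; the version ref and file path are illustrative assumptions, and no output names are shown because the entry does not document any:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Read front-matter
    uses: mheap/markdown-meta-action@v1   # ref is illustrative
    with:
      file: docs/post.md                  # the Markdown file to read (illustrative path)
```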
Action ID: marketplace/rhysd/lighthouse-ci-action
Author: Unknown
Publisher: rhysd
Repository: github.com/rhysd/lighthouse-ci-action
Audit URLs using Lighthouse and test performance with Lighthouse CI
| Name | Required | Description |
|---|---|---|
| urls | Optional | List of URL(s) to analyze |
| budgetPath | Optional | Path to a Lighthouse budgets.json file |
| configPath | Optional | Path to a LHCI lighthouserc.json file |
| uploadArtifacts | Optional | Opt-out of saving Lighthouse results as action artifacts |
| artifactName | Optional | Name of the artifact group if using uploadArtifacts. default: lighthouse-results Default: lighthouse-results |
| temporaryPublicStorage | Optional | Opt-in to saving Lighthouse results to temporary public storage |
| runs | Optional | Number of runs to do per URL |
| serverBaseUrl | Optional | Address of a LHCI server |
| serverToken | Optional | API token to push to LHCI server |
| basicAuthUsername | Optional | Basic auth username for LHCI server |
| basicAuthPassword | Optional | Basic auth password for LHCI server |
| uploadExtraArgs | Optional | Upload command extra arguments |
| Name | Description |
|---|---|
| resultsPath | Path to the folder with LHCI results |
| links | Links to compare/result UI for each URL (content of links.json) |
| assertionResults | Assertion results (content of assertion-results.json) |
name: 'Lighthouse CI Action'
description: 'Audit URLs using Lighthouse and test performance with Lighthouse CI'
inputs:
urls:
description: 'List of URL(s) to analyze'
budgetPath:
description: 'Path to a Lighthouse budgets.json file'
configPath:
description: 'Path to a LHCI lighthouserc.json file'
uploadArtifacts:
description: 'Opt-out of saving Lighthouse results as action artifacts'
artifactName:
description: 'Name of the artifact group if using uploadArtifacts. default: lighthouse-results'
default: lighthouse-results
temporaryPublicStorage:
description: 'Opt-in to saving Lighthouse results to temporary public storage'
runs:
description: 'Number of runs to do per URL'
serverBaseUrl:
description: 'Address of a LHCI server'
serverToken:
description: 'API token to push to LHCI server'
basicAuthUsername:
description: 'Basic auth username for LHCI server'
basicAuthPassword:
description: 'Basic auth password for LHCI server'
uploadExtraArgs:
description: 'Upload command extra arguments'
outputs:
resultsPath:
description: 'Path to the folder with LHCI results'
links:
description: 'Links to compare/result UI for each URL (content of links.json)'
assertionResults:
description: 'Assertion results (content of assertion-results.json)'
runs:
using: 'node20'
main: 'src/index.js'
branding:
icon: 'bar-chart-2'
color: 'gray-dark'
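A minimal workflow sketch combining the `urls` and `runs` inputs with the `links` output; the version ref and audited URLs are illustrative assumptions:

```yaml
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Audit URLs with Lighthouse CI
        id: lhci
        uses: rhysd/lighthouse-ci-action@v3   # ref is illustrative
        with:
          urls: |
            https://example.com/
            https://example.com/pricing
          runs: 3                             # number of runs per URL
      - name: Show comparison links
        run: echo '${{ steps.lhci.outputs.links }}'
```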
Action ID: marketplace/skx/github-action-publish-binaries
Author: Steve Kemp
Publisher: skx
Repository: github.com/skx/github-action-publish-binaries
Publish binaries when new releases are made.
| Name | Required | Description |
|---|---|---|
| args | Required | The pattern of files to upload |
| releaseId | Optional | The release Id of the release to be targeted. |
name: 'github-action-publish-binaries'
description: 'Publish binaries when new releases are made.'
author: 'Steve Kemp'
branding:
icon: save
color: black
inputs:
args:
description: 'The pattern of files to upload'
required: true
releaseId:
description: 'The release Id of the release to be targeted.'
required: false
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.args }}
- ${{ inputs.releaseId }}
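A sketch of how the `args` input might be used on release creation; the ref, build step, artifact pattern, and the `GITHUB_TOKEN` environment variable are assumptions not documented in this entry:

```yaml
on:
  release:
    types: [created]
jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the release artifacts
        run: make release                                 # illustrative build step
      - uses: skx/github-action-publish-binaries@master   # ref is illustrative
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}       # assumed token for the upload; not listed as an input above
        with:
          args: 'dist/*'                                  # the pattern of files to upload (illustrative)
```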
Action ID: marketplace/creyD/prettier_action
Author: Conrad Großer <grosserconrad@gmail.com>
Publisher: creyD
Repository: github.com/creyD/prettier_action
Automatically runs prettier on all your changes.
| Name | Required | Description |
|---|---|---|
| commit_message | Optional | Commit message, will be ignored if used with same_commit Default: Prettified Code! |
| commit_description | Optional | Extended commit message, will be ignored if used with same_commit |
| same_commit | Optional | Update the current commit instead of creating a new one |
| commit_options | Optional | Commit options |
| push_options | Optional | Git push options |
| file_pattern | Optional | File pattern used for `git add`, can't be used with only_changed or only_changed_pr! Default: * |
| prettier_options | Optional | Options for the `prettier` command Default: --write **/*.js |
| dry | Optional | Running the script in dry mode just shows whether there are files that should be prettified or not |
| no_commit | Optional | Can be used to avoid committing the changes (can be combined with dry mode, useful when another workflow step commits after this commit anyway) |
| prettier_version | Optional | Specific version of prettier (by default just use the latest version) Default: latest |
| working_directory | Optional | Specify a directory to cd into before installing prettier and running it Default: ${{ github.action_path }} |
| only_changed | Optional | Only prettify files changed in the last commit, can't be used with file_pattern! |
| only_changed_pr | Optional | Only prettify files changed in the current PR. If specified with only_changed, only_changed will take precedence. Can't be used with file_pattern! |
| prettier_plugins | Optional | Install Prettier plugins, i.e. `@prettier/plugin-php @prettier/plugin-other` |
| github_token | Optional | GitHub Token or PAT token used to authenticate against a repository Default: ${{ github.token }} |
| clean_node_folder | Optional | Remove the node_modules folder before committing changes Default: True |
| git_identity | Optional | Which identity is used for git name/email when committing changes. Needs to be one of "actions" or "author". Default: actions |
| allow_other_plugins | Optional | Allow other plugins to be installed. By default, we are checking if the plugins are actually prettier plugins. |
name: Prettier Action
description: Automatically runs prettier on all your changes.
author: Conrad Großer <grosserconrad@gmail.com>
inputs:
commit_message:
description: Commit message, will be ignored if used with same_commit
required: false
default: "Prettified Code!"
commit_description:
description: Extended commit message, will be ignored if used with same_commit
required: false
default: ""
same_commit:
description: Update the current commit instead of creating a new one
required: false
default: false
commit_options:
description: Commit options
required: false
push_options:
description: Git push options
required: false
file_pattern:
description: File pattern used for `git add`, can't be used with only_changed or only_changed_pr!
required: false
default: "*"
prettier_options:
description: Options for the `prettier` command
required: false
default: "--write **/*.js"
dry:
description: Running the script in dry mode just shows whether there are files that should be prettified or not
required: false
default: false
no_commit:
description: Can be used to avoid committing the changes (can be combined with dry mode, useful when another workflow step commits after this commit anyway)
required: false
default: false
prettier_version:
description: Specific version of prettier (by default just use the latest version)
required: false
default: "latest"
working_directory:
description: Specify a directory to cd into before installing prettier and running it
required: false
default: ${{ github.action_path }}
only_changed:
description: Only prettify files changed in the last commit, can't be used with file_pattern!
required: false
default: false
only_changed_pr:
description: Only prettify files changed in the current PR. If specified with only_changed, only_changed will take precedence. Can't be used with file_pattern!
required: false
default: false
prettier_plugins:
description: Install Prettier plugins, i.e. `@prettier/plugin-php @prettier/plugin-other`
required: false
default: ""
github_token:
description: GitHub Token or PAT token used to authenticate against a repository
required: false
default: ${{ github.token }}
clean_node_folder:
description: Remove the node_modules folder before committing changes
required: false
default: true
git_identity:
description: Which identity is used for git name/email when committing changes. Needs to be one of "actions" or "author".
required: false
default: "actions"
allow_other_plugins:
description: Allow other plugins to be installed. By default, we are checking if the plugins are actually prettier plugins.
required: false
default: false
runs:
using: "composite"
steps:
- name: Prettify code!
shell: bash
run: >-
PATH=$GITHUB_ACTION_PATH/node_modules/.bin:$PATH
${{ github.action_path }}/entrypoint.sh
env:
INPUT_COMMIT_MESSAGE: ${{ inputs.commit_message }}
INPUT_COMMIT_DESCRIPTION: ${{ inputs.commit_description }}
INPUT_SAME_COMMIT: ${{ inputs.same_commit }}
INPUT_COMMIT_OPTIONS: ${{ inputs.commit_options }}
INPUT_PUSH_OPTIONS: ${{ inputs.push_options }}
INPUT_FILE_PATTERN: ${{ inputs.file_pattern }}
INPUT_PRETTIER_OPTIONS: ${{ inputs.prettier_options }}
INPUT_DRY: ${{ inputs.dry }}
INPUT_NO_COMMIT: ${{ inputs.no_commit }}
INPUT_PRETTIER_VERSION: ${{ inputs.prettier_version }}
INPUT_ONLY_CHANGED: ${{ inputs.only_changed }}
INPUT_ONLY_CHANGED_PR: ${{ inputs.only_changed_pr }}
INPUT_PRETTIER_PLUGINS: ${{ inputs.prettier_plugins }}
INPUT_WORKING_DIRECTORY: ${{ inputs.working_directory }}
INPUT_GITHUB_TOKEN: ${{ inputs.github_token }}
INPUT_CLEAN_NODE_FOLDER: ${{ inputs.clean_node_folder }}
INPUT_GIT_IDENTITY: ${{ inputs.git_identity }}
INPUT_ALLOW_OTHER_PLUGINS: ${{ inputs.allow_other_plugins }}
branding:
icon: "award"
color: "green"
Action ID: marketplace/peter-evans/duplicati-action
Author: Peter Evans
Publisher: peter-evans
Repository: github.com/peter-evans/duplicati-action
A GitHub action for Duplicati - Store securely encrypted backups in the cloud!
name: 'Duplicati Action'
author: 'Peter Evans'
description: 'A GitHub action for Duplicati - Store securely encrypted backups in the cloud!'
runs:
using: 'docker'
image: 'Dockerfile'
branding:
icon: 'upload-cloud'
color: 'yellow'
Action ID: marketplace/dflook/tofu-version
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/tofu-version
Prints OpenTofu and providers versions
| Name | Required | Description |
|---|---|---|
| path | Optional | The path to the OpenTofu root module directory. Default: . |
| workspace | Optional | The workspace to determine the OpenTofu version for. Default: default |
| backend_config | Optional | List of OpenTofu backend config values, one per line. This will be used to fetch the OpenTofu version set in the cloud workspace if using the `remote` backend. For other backend types, this is used to fetch the version that most recently wrote to the OpenTofu state. |
| backend_config_file | Optional | List of OpenTofu backend config files to use, one per line. Paths should be relative to the GitHub Actions workspace. This will be used to fetch the OpenTofu version set in the cloud workspace if using the `remote` backend. For other backend types, this is used to fetch the version that most recently wrote to the OpenTofu state. |
| Name | Description |
|---|---|
| terraform | The Hashicorp Terraform or OpenTofu version that is used by the configuration. |
| tofu | If the action chose a version of OpenTofu, this will be set to the version that is used by the configuration. |
name: tofu-version
description: Prints OpenTofu and providers versions
author: Daniel Flook
inputs:
path:
description: The path to the OpenTofu root module directory.
required: false
default: "."
workspace:
description: The workspace to determine the OpenTofu version for.
required: false
default: "default"
backend_config:
description: |
List of OpenTofu backend config values, one per line.
This will be used to fetch the OpenTofu version set in the cloud workspace if using the `remote` backend.
For other backend types, this is used to fetch the version that most recently wrote to the OpenTofu state.
required: false
default: ""
backend_config_file:
description: |
List of OpenTofu backend config files to use, one per line.
Paths should be relative to the GitHub Actions workspace
This will be used to fetch the OpenTofu version set in the cloud workspace if using the `remote` backend.
For other backend types, this is used to fetch the version that most recently wrote to the OpenTofu state.
required: false
default: ""
outputs:
terraform:
description: The Hashicorp Terraform or OpenTofu version that is used by the configuration.
tofu:
description: If the action chose a version of OpenTofu, this will be set to the version that is used by the configuration.
runs:
env:
OPENTOFU: true
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/version.sh
branding:
icon: globe
color: purple
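A minimal job sketch using the `path` input and the `tofu` output; the version ref and module directory are illustrative assumptions:

```yaml
jobs:
  tofu-version:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Print OpenTofu version
        id: version
        uses: dflook/tofu-version@v1   # major-version ref is illustrative
        with:
          path: infra                  # illustrative root module directory
      - run: echo "Configuration uses OpenTofu ${{ steps.version.outputs.tofu }}"
```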
Action ID: marketplace/amirisback/nutrition-framework-desktop
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/nutrition-framework-desktop
Final Project for Bachelor Degree, Full and Clear Documentation, Have Empty View :books:
| Name | Required | Description |
|---|---|---|
| myInput | Optional | Input to use Default: world |
name: 'Nutrition Framework'
description: 'Final Project for Bachelor Degree, Full and Clear Documentation, Have Empty View :books:'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/spring-cloud-deploy
Author: Unknown
Publisher: azure
Repository: github.com/azure/spring-cloud-deploy
Deploy applications to Azure Spring Cloud and manage deployments.
| Name | Required | Description |
|---|---|---|
| azure-subscription | Required | Select the Azure Resource Manager subscription for the deployment. |
| action | Required | Action to be performed on Azure Spring Cloud. Default: deploy |
| service-name | Required | Select the Azure Spring Cloud service to which to deploy. |
| app-name | Required | Select the Azure Spring Cloud app to deploy. |
| use-staging-deployment | Required | Automatically select the deployment that's set as Staging at the time the task runs. Default: True |
| create-new-deployment | Optional | Whether to target the deployment that's set as Staging at the time of execution. If unchecked, the 'Deployment Name' setting must be set. |
| deployment-name | Optional | The deployment to which this task will apply. Lowercase letters, - and numbers only; must start with a letter. |
| package | Optional | File path to the package or a folder containing the Spring Cloud app contents. Default: ${{ github.workspace }}/**/*.jar |
| environment-variables | Optional | Edit the app's environment variables. |
| jvm-options | Optional | Edit the app's JVM options. A String containing JVM Options. Example: `-Xms1024m -Xmx2048m` |
| runtime-version | Optional | The runtime on which the app will run. |
| dotnetcore-mainentry-path | Optional | The path to the .NET executable relative to zip root. |
| version | Optional | The runtime on which the app will run. |
# azure spring cloud action
name: 'Azure Spring Cloud'
description: 'Deploy applications to Azure Spring Cloud and manage deployments.'
inputs:
azure-subscription:
description: 'Select the Azure Resource Manager subscription for the deployment.'
required: true
action:
description: 'Action to be performed on Azure Spring Cloud.'
required: true
default: 'deploy'
service-name:
description: 'Select the Azure Spring Cloud service to which to deploy.'
required: true
app-name:
description: 'Select the Azure Spring Cloud app to deploy.'
required: true
use-staging-deployment:
description: "Automatically select the deployment that's set as Staging at the time the task runs."
required: true
default: true
create-new-deployment:
description: "Whether to target the deployment that's set as Staging at the time of execution. If unchecked, the 'Deployment Name' setting must be set."
required: false
default: false
deployment-name:
description: 'The deployment to which this task will apply. Lowercase letters, - and numbers only; must start with a letter.'
required: false
package:
description: "File path to the package or a folder containing the Spring Cloud app contents."
required: false
default: '${{ github.workspace }}/**/*.jar'
environment-variables:
description: "Edit the app's environment variables."
required: false
jvm-options:
description: "Edit the app's JVM options. A String containing JVM Options. Example: `-Xms1024m -Xmx2048m`"
required: false
runtime-version:
description: 'The runtime on which the app will run.'
required: false
dotnetcore-mainentry-path:
description: 'The path to the .NET executable relative to zip root.'
required: false
version:
description: 'The runtime on which the app will run.'
required: false
branding:
icon: 'icon.svg'
runs:
using: 'node12'
main: 'lib/main.js'
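A deployment step sketch built from the required inputs above; the action ref, the preceding Azure login step, the secret names, and the service/app names are illustrative assumptions, not part of this entry:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: azure/login@v1                 # assumes a prior Azure login step; not documented in this entry
    with:
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  - uses: azure/spring-cloud-deploy@v1   # ref is illustrative
    with:
      azure-subscription: ${{ secrets.AZURE_SUBSCRIPTION_ID }}   # illustrative secret name
      action: deploy
      service-name: my-spring-service    # illustrative
      app-name: my-app                   # illustrative
      use-staging-deployment: false
      package: ${{ github.workspace }}/target/*.jar
```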
Action ID: marketplace/dflook/terraform-fmt-check
Author: Daniel Flook
Publisher: dflook
Repository: github.com/dflook/terraform-fmt-check
Check that OpenTofu configuration files are in canonical format
| Name | Required | Description |
|---|---|---|
| path | Optional | The path containing Terraform files to check the formatting of. Default: . |
| workspace | Optional | Terraform workspace to inspect when discovering the Terraform version to use, if the version is not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. Default: default |
| backend_config | Optional | List of Terraform backend config values, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. |
| backend_config_file | Optional | List of Terraform backend config files to use, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified. See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details. Paths should be relative to the GitHub Actions workspace |
| Name | Description |
|---|---|
failure-reason |
When the job outcome is `failure` because the format check failed, this will be set to 'check-failed'. If the job fails for any other reason this will not be set. This can be used with the Actions expression syntax to conditionally run a step when the format check fails. |
name: terraform-fmt-check
description: Check that OpenTofu configuration files are in canonical format
author: Daniel Flook
inputs:
path:
description: The path containing Terraform files to check the formatting of.
required: false
default: "."
workspace:
description: |
Terraform workspace to inspect when discovering the Terraform version to use, if the version is not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: "default"
backend_config:
description: |
List of Terraform backend config values, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
required: false
default: ""
backend_config_file:
description: |
List of Terraform backend config files to use, one per line. This is used for discovering the Terraform version to use, if the version is not otherwise specified.
See [dflook/terraform-version](https://github.com/dflook/terraform-github-actions/tree/main/terraform-version#terraform-version-action) for details.
Paths should be relative to the GitHub Actions workspace
required: false
default: ""
outputs:
failure-reason:
description: |
When the job outcome is `failure` because the format check failed, this will be set to 'check-failed'.
If the job fails for any other reason this will not be set.
This can be used with the Actions expression syntax to conditionally run a step when the format check fails.
runs:
using: docker
image: docker://danielflook/terraform-github-actions@sha256:5bdd47d526c0770b32f11f395c953d5bb06aa77ea4cb8818c4cda6167a0018cc
entrypoint: /entrypoints/fmt-check.sh
branding:
icon: globe
color: purple
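A minimal usage sketch for the inputs and output documented above; the @v1 tag and the terraform/ directory are assumptions for illustration.
name: Check Terraform formatting
on: [pull_request]
jobs:
  fmt-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: terraform fmt check
        id: fmt
        uses: dflook/terraform-fmt-check@v1   # major version tag assumed
        with:
          path: terraform/                    # hypothetical directory; defaults to the repo root
      - name: Explain the failure
        if: ${{ failure() && steps.fmt.outputs.failure-reason == 'check-failed' }}
        run: echo "Run 'terraform fmt -recursive' locally and push the result."
The final step shows how the documented failure-reason output can gate follow-up work when the format check fails.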
Action ID: marketplace/amirisback/lucky-spin-wheel
Author: Frogobox
Publisher: amirisback
Repository: github.com/amirisback/lucky-spin-wheel
SDK for anything your problem to make easier developing android apps
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Frogo-Kick-Start-Library'
description: 'SDK for anything your problem to make easier developing android apps'
author: 'Frogobox'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/vn7n24fzkq/typescript-action
Author: Your name or organization here
Publisher: vn7n24fzkq
Repository: github.com/vn7n24fzkq/typescript-action
Provide a description here
| Name | Required | Description |
|---|---|---|
milliseconds |
Required | input description here Default: default value if applicable |
name: 'Your name here'
description: 'Provide a description here'
author: 'Your name or organization here'
inputs:
milliseconds: # change this
required: true
description: 'input description here'
default: 'default value if applicable'
runs:
using: 'node12'
main: 'dist/index.js'
Action ID: marketplace/sobolevn/codecov-action
Author: Ibrahim Ali <@ibrahim0814> & Thomas Hu <@thomasrockhu> | Codecov
Publisher: sobolevn
Repository: github.com/sobolevn/codecov-action
GitHub Action that uploads coverage reports for your repository to codecov.io
| Name | Required | Description |
|---|---|---|
token |
Optional | Repository upload token - get it from codecov.io. Required only for private repositories |
files |
Optional | Comma-separated list of files to upload |
directory |
Optional | Directory to search for coverage reports. |
flags |
Optional | Flag upload to group coverage metrics (e.g. unittests | integration | ui,chrome) |
commit_parent |
Optional | The commit SHA of the parent for which you are uploading coverage. If not present, the parent will be determined using the API of your repository provider. When using the repository providers API, the parent is determined via finding the closest ancestor to the commit. |
dry_run |
Optional | Don't upload files to Codecov |
env_vars |
Optional | Environment variables to tag the upload with (e.g. PYTHON | OS,PYTHON) |
fail_ci_if_error |
Optional | Specify whether or not CI build should fail if Codecov runs into an error during upload |
file |
Optional | Path to coverage file to upload |
functionalities |
Optional | Comma-separated list, see the README for options and their usage |
move_coverage_to_trash |
Optional | Move discovered coverage reports to the trash |
name |
Optional | User defined upload name. Visible in Codecov UI |
override_branch |
Optional | Specify the branch name |
override_build |
Optional | Specify the build number |
override_commit |
Optional | Specify the commit SHA |
override_pr |
Optional | Specify the pull request number |
override_tag |
Optional | Specify the git tag |
os |
Optional | Override the assumed OS. Options are alpine | linux | macos | windows. |
root_dir |
Optional | Used when not in git/hg project to identify project root directory |
slug |
Optional | Specify the slug manually (Enterprise use) |
url |
Optional | Change the upload host (Enterprise use) |
verbose |
Optional | Specify whether the Codecov output should be verbose |
version |
Optional | Specify which version of the Codecov Uploader should be used. Defaults to `latest` |
working-directory |
Optional | Directory in which to execute codecov.sh |
name: 'Codecov'
description: 'GitHub Action that uploads coverage reports for your repository to codecov.io'
author: 'Ibrahim Ali <@ibrahim0814> & Thomas Hu <@thomasrockhu> | Codecov'
inputs:
token:
description: 'Repository upload token - get it from codecov.io. Required only for private repositories'
required: false
files:
description: 'Comma-separated list of files to upload'
required: false
directory:
description: 'Directory to search for coverage reports.'
required: false
flags:
description: 'Flag upload to group coverage metrics (e.g. unittests | integration | ui,chrome)'
required: false
commit_parent:
description: 'The commit SHA of the parent for which you are uploading coverage. If not present, the parent will be determined using the API of your repository provider. When using the repository providers API, the parent is determined via finding the closest ancestor to the commit.'
required: false
dry_run:
description: "Don't upload files to Codecov"
required: false
env_vars:
description: 'Environment variables to tag the upload with (e.g. PYTHON | OS,PYTHON)'
required: false
fail_ci_if_error:
description: 'Specify whether or not CI build should fail if Codecov runs into an error during upload'
required: false
file:
description: 'Path to coverage file to upload'
required: false
functionalities:
description: 'Comma-separated list, see the README for options and their usage'
required: false
move_coverage_to_trash:
description: 'Move discovered coverage reports to the trash'
required: false
name:
description: 'User defined upload name. Visible in Codecov UI'
required: false
override_branch:
description: 'Specify the branch name'
required: false
override_build:
description: 'Specify the build number'
required: false
override_commit:
description: 'Specify the commit SHA'
required: false
override_pr:
description: 'Specify the pull request number'
required: false
override_tag:
description: 'Specify the git tag'
required: false
os:
description: 'Override the assumed OS. Options are alpine | linux | macos | windows.'
required: false
root_dir:
description: 'Used when not in git/hg project to identify project root directory'
required: false
slug:
description: 'Specify the slug manually (Enterprise use)'
required: false
url:
description: 'Change the upload host (Enterprise use)'
required: false
verbose:
description: 'Specify whether the Codecov output should be verbose'
required: false
version:
description: 'Specify which version of the Codecov Uploader should be used. Defaults to `latest`'
required: false
working-directory:
description: 'Directory in which to execute codecov.sh'
required: false
branding:
color: 'red'
icon: 'umbrella'
runs:
using: 'node12'
main: 'dist/index.js'
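A minimal upload sketch using only the inputs documented above. The version tag and the test command producing coverage.xml are assumptions; check the fork's releases for the tag to pin.
name: Tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest --cov --cov-report=xml   # hypothetical test command producing coverage.xml
      - name: Upload coverage
        uses: sobolevn/codecov-action@v2     # version tag assumed
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: coverage.xml
          flags: unittests
          fail_ci_if_error: true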
Action ID: marketplace/wlixcc/up2qn-plus
Author: beimengyeyu <me@beimengyeyu.com>
Publisher: wlixcc
Repository: github.com/wlixcc/up2qn-plus
Upload files to Qiniu Cloud
| Name | Required | Description |
|---|---|---|
bucket |
Required | Qiniu Cloud bucket name |
zone |
Required | Qiniu Cloud storage region, e.g. South China (华南), East China (华东), North China (华北), North America (北美), Southeast Asia (东南亚) |
access_key |
Required | AccessKey, obtained from Qiniu Cloud Personal Center -> Key Management. To keep your bucket secure, store it in a GitHub repository secret |
secret_key |
Required | SecretKey, obtained from Qiniu Cloud Personal Center -> Key Management. To keep your bucket secure, store it in a GitHub repository secret |
local_dir |
Required | Local directory to upload Default: dist |
local_exclude |
Optional | Local files to exclude from the upload |
target_dir |
Optional | Target directory in the Qiniu Cloud bucket Default: / |
rpc_timeout |
Optional | Timeout in milliseconds Default: 120000 |
name: 'up2qn-plus'
description: 'Upload files to Qiniu Cloud'
author: 'beimengyeyu <me@beimengyeyu.com>'
branding:
color: 'green'
icon: 'arrow-up-circle'
inputs:
bucket:
description: 'Qiniu Cloud bucket name'
required: true
zone:
description: 'Qiniu Cloud storage region, e.g. South China (华南), East China (华东), North China (华北), North America (北美), Southeast Asia (东南亚)'
required: true
access_key:
description: 'AccessKey, obtained from Qiniu Cloud Personal Center -> Key Management. To keep your bucket secure, store it in a GitHub repository secret'
required: true
secret_key:
description: 'SecretKey, obtained from Qiniu Cloud Personal Center -> Key Management. To keep your bucket secure, store it in a GitHub repository secret'
required: true
local_dir:
description: 'Local directory to upload'
required: true
default: 'dist'
local_exclude:
description: 'Local files to exclude from the upload'
target_dir:
description: 'Target directory in the Qiniu Cloud bucket'
default: '/'
rpc_timeout:
description: 'Timeout in milliseconds'
default: '120000'
runs:
using: 'node12'
main: 'dist/index.js'
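A minimal upload sketch using only the documented inputs. The version tag, the build command, and the bucket/directory names are assumptions; the zone value should be one of the region names the action documents.
name: Publish to Qiniu
on:
  push:
    branches: [main]
jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build        # hypothetical build step producing ./dist
      - uses: wlixcc/up2qn-plus@v1          # version tag assumed
        with:
          bucket: my-bucket                 # hypothetical bucket name
          zone: 华东                         # region name as documented by the action (East China here)
          access_key: ${{ secrets.QINIU_ACCESS_KEY }}
          secret_key: ${{ secrets.QINIU_SECRET_KEY }}
          local_dir: dist
          target_dir: /static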
Action ID: marketplace/julia-actions/julia-buildpkg
Author: David Anthoff
Publisher: julia-actions
Repository: github.com/julia-actions/julia-buildpkg
Run the build step in a Julia package
| Name | Required | Description |
|---|---|---|
project |
Optional | Value passed to the --project flag. The default value is the repository root: "@." Default: @. |
precompile |
Optional | Whether to allow auto-precompilation (via the `JULIA_PKG_PRECOMPILE_AUTO` env var). Options: yes | no. Default value: no. Default: no |
localregistry |
Optional | Add local registries hosted on GitHub. Specified by providing the url (https/ssh) to the repositories as a newline (\n) separated list. User is responsible for setting up the necessary SSH-Keys to access the repositories if necessary. |
git_cli |
Optional | Determine if Pkg uses the cli git executable (Julia >= 1.7). Might be necessary for more complicated SSH setups. Options: true | false. Default: false |
ignore-no-cache |
Optional | Whether to ignore if there appears to be no depot caching. Silences an action notice recommending `julia-actions/cache`. Default: false |
name: 'Run Pkg.build'
description: 'Run the build step in a Julia package'
author: 'David Anthoff'
branding:
icon: 'box'
color: 'gray-dark'
inputs:
project:
description: 'Value passed to the --project flag. The default value is the repository root: "@."'
default: '@.'
precompile:
description: 'Whether to allow auto-precompilation (via the `JULIA_PKG_PRECOMPILE_AUTO` env var). Options: yes | no. Default value: no.'
default: 'no'
localregistry:
description: 'Add local registries hosted on GitHub. Specified by providing the url (https/ssh) to the repositories as a newline (\n) separated list.
User is responsible for setting up the necessary SSH-Keys to access the repositories if necessary.'
default: ''
git_cli:
description: 'Determine if Pkg uses the cli git executable (Julia >= 1.7). Might be necessary for more complicated SSH setups.
Options: true | false. Default: false'
default: 'false'
ignore-no-cache:
description: 'Whether to ignore if there appears to be no depot caching. Silences an action notice recommending `julia-actions/cache`.'
default: 'false'
runs:
using: 'composite'
steps:
- name: Set and export registry flavor preference
run: echo "JULIA_PKG_SERVER_REGISTRY_PREFERENCE=${JULIA_PKG_SERVER_REGISTRY_PREFERENCE:-eager}" >> ${GITHUB_ENV}
shell: bash
- run: |
if "${{ inputs.ignore-no-cache }}" == "false" && !isdir(DEPOT_PATH[1])
println("::notice title=[julia-buildpkg] Caching of the julia depot was not detected ::Consider using `julia-actions/cache` to speed up runs https://github.com/julia-actions/cache To ignore, set input `ignore-no-cache: true` ")
end
import Pkg
# Determine if Pkg uses git-cli executable instead of LibGit2
VERSION >= v"1.7-" && (ENV["JULIA_PKG_USE_CLI_GIT"] = ${{ inputs.git_cli }})
if VERSION < v"1.7-" && ${{ inputs.git_cli }} == true
printstyled("::notice::JULIA_PKG_USE_CLI_GIT requires Julia >= 1.7. Using default LibGit2 git-interface instead! \n"; color = :yellow)
end
if VERSION >= v"1.5-"
if VERSION >= v"1.8-"
# Install the default registries
Pkg.Registry.add()
else
Pkg.Registry.add("General")
end
# If provided add local registries
if !isempty("${{ inputs.localregistry }}")
local_repos = split("${{ inputs.localregistry }}", "\n") .|> string
for repo_url in local_repos
isempty(repo_url) && continue
Pkg.Registry.add(Pkg.RegistrySpec(; url = repo_url))
end
end
end
VERSION >= v"1.1.0-rc1" ? retry(Pkg.build)(verbose=true) : retry(Pkg.build)()
shell: julia --color=yes --project=${{ inputs.project }} {0}
env:
JULIA_PKG_PRECOMPILE_AUTO: "${{ inputs.precompile }}"
GITHUB_TOKEN: ${{ github.token }}
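A minimal CI sketch around this action. julia-actions/setup-julia provides the Julia runtime and julia-actions/cache is the depot cache the action itself recommends; the version tags and the Julia version are assumptions.
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2   # companion action; tag assumed
        with:
          version: '1.10'
      - uses: julia-actions/cache@v2         # avoids the no-depot-cache notice; tag assumed
      - uses: julia-actions/julia-buildpkg@v1
        with:
          precompile: 'no'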
Action ID: marketplace/axel-op/googlejavaformat-action
Author: axel-op
Publisher: axel-op
Repository: github.com/axel-op/googlejavaformat-action
Automatically format Java files using Google Java Style
| Name | Required | Description |
|---|---|---|
args |
Optional | Arguments for the Google Java Format executable Default: --replace |
files |
Optional | Pattern to match the files to be formatted Default: **/*.java |
files-excluded |
Optional | Pattern to match the files to be ignored by this action |
skip-commit |
Optional | By default, this action commits any change made to the files. Set to "true" to skip this commit. Default: false |
release-name |
Optional | Release name of Google Java Format to use. Examples: 1.7, v1.24.0... Leave empty to use the latest release compatible with your JDK. |
github-token |
Optional | If provided, will be used to authenticate the calls to the GitHub API. |
commit-message |
Optional | This message will be used for commits made by this action |
name: "Google Java Format"
description: "Automatically format Java files using Google Java Style"
author: "axel-op"
branding:
color: "red"
icon: "align-right"
inputs:
args:
description: "Arguments for the Google Java Format executable"
required: false
default: "--replace"
files:
description: "Pattern to match the files to be formatted"
required: false
default: "**/*.java"
files-excluded:
description: "Pattern to match the files to be ignored by this action"
required: false
skip-commit:
description: "By default, this action commits any change made to the files. Set to \"true\" to skip this commit."
required: false
default: "false"
release-name:
description: "Release name of Google Java Format to use. Examples: 1.7, v1.24.0... Leave empty to use the latest release compatible with your JDK."
required: false
github-token:
description: "If provided, will be used to authenticate the calls to the GitHub API."
required: false
commit-message:
description: "This message will be used for commits made by this action"
required: false
runs:
using: "node20"
main: "dist/index.js"
Action ID: marketplace/amirisback/android-hilt-dependency-injection
Author: Muhammad Faisal Amir
Publisher: amirisback
Repository: github.com/amirisback/android-hilt-dependency-injection
Full and Clear Documentation
| Name | Required | Description |
|---|---|---|
myInput |
Optional | Input to use Default: world |
name: 'Android-Research-Tech'
description: 'Full and Clear Documentation'
author: 'Muhammad Faisal Amir'
branding:
icon: archive
color: green
inputs:
myInput:
description: 'Input to use'
required: false
default: 'world'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.myInput }}
Action ID: marketplace/azure/appservice-build
Author: Unknown
Publisher: azure
Repository: github.com/azure/appservice-build
Build an Azure Web App on Linux using the Oryx build system.
| Name | Required | Description |
|---|---|---|
source-directory |
Optional | Relative path (within the repository) to the source directory of the project you want to build; if no value is provided for this, the root of the repository ("GITHUB_WORKSPACE" environment variable) will be built. |
platform |
Optional | Programming platform used to build the web app; if no value is provided, Oryx will determine the platform to build with. Supported values: dotnet, golang, java, nodejs, php, python, ruby |
platform-version |
Optional | Version of the programming platform used to build the web app; if no value is provided, Oryx will determine the version needed to build the repository. |
output-directory |
Optional | The directory where the build output will be copied to. |
name: 'App Service Web App Build Action'
description: 'Build an Azure Web App on Linux using the Oryx build system.'
inputs:
source-directory:
description: |
'Relative path (within the repository) to the source directory of the project you want to build; if no value is
provided for this, the root of the repository ("GITHUB_WORKSPACE" environment variable) will be built.'
required: false
platform:
description: |
'Programming platform used to build the web app; if no value is provided, Oryx will determine the platform to
build with. Supported values: dotnet, golang, java, nodejs, php, python, ruby'
required: false
platform-version:
description: |
'Version of the programming platform used to build the web app; if no value is provided, Oryx will determine the
version needed to build the repository.'
required: false
output-directory:
description: 'The directory where the build output will be copied to.'
required: false
branding:
icon: 'login.svg'
color: 'blue'
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.source-directory }}
- ${{ inputs.platform }}
- ${{ inputs.platform-version }}
- ${{ inputs.output-directory }}
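A minimal build sketch using the documented inputs. The version tag, the platform choice, and the source directory are assumptions for illustration.
name: Build web app with Oryx
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build with Oryx
        uses: azure/appservice-build@v2   # major version tag assumed
        with:
          platform: python
          platform-version: '3.10'
          source-directory: app           # hypothetical subdirectory; defaults to the repo root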
Action ID: marketplace/actions/setup-python
Author: GitHub
Publisher: actions
Repository: github.com/actions/setup-python
Set up a specific version of Python and add the command-line tools to the PATH.
| Name | Required | Description |
|---|---|---|
python-version |
Optional | Version range or exact version of Python or PyPy to use, using SemVer's version range syntax. Reads from .python-version if unset. |
python-version-file |
Optional | File containing the Python version to use. Example: .python-version |
cache |
Optional | Used to specify a package manager for caching in the default directory. Supported values: pip, pipenv, poetry. |
architecture |
Optional | The target architecture (x86, x64, arm64) of the Python or PyPy interpreter. |
check-latest |
Optional | Set this option if you want the action to check for the latest available version that satisfies the version spec. |
token |
Optional | The token used to authenticate when fetching Python distributions from https://github.com/actions/python-versions. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting. Default: ${{ github.server_url == 'https://github.com' && github.token || '' }} |
cache-dependency-path |
Optional | Used to specify the path to dependency files. Supports wildcards or a list of file names for caching multiple dependencies. |
update-environment |
Optional | Set this option if you want the action to update environment variables. Default: True |
allow-prereleases |
Optional | When 'true', a version range passed to 'python-version' input will match prerelease versions if no GA versions are found. Only 'x.y' version range is supported for CPython. |
freethreaded |
Optional | When 'true', use the freethreaded version of Python. |
pip-version |
Optional | Used to specify the version of pip to install with the Python. Supported format: major[.minor][.patch]. |
pip-install |
Optional | Used to specify the packages to install with pip after setting up Python. Can be a requirements file or package names. |
| Name | Description |
|---|---|
python-version |
The installed Python or PyPy version. Useful when given a version range as input. |
cache-hit |
A boolean value to indicate a cache entry was found |
python-path |
The absolute path to the Python or PyPy executable. |
---
name: "Setup Python"
description: "Set up a specific version of Python and add the command-line tools to the PATH."
author: "GitHub"
inputs:
python-version:
description: "Version range or exact version of Python or PyPy to use, using SemVer's version range syntax. Reads from .python-version if unset."
python-version-file:
description: "File containing the Python version to use. Example: .python-version"
cache:
description: "Used to specify a package manager for caching in the default directory. Supported values: pip, pipenv, poetry."
required: false
architecture:
description: "The target architecture (x86, x64, arm64) of the Python or PyPy interpreter."
check-latest:
description: "Set this option if you want the action to check for the latest available version that satisfies the version spec."
default: false
token:
description: "The token used to authenticate when fetching Python distributions from https://github.com/actions/python-versions. When running this action on github.com, the default value is sufficient. When running on GHES, you can pass a personal access token for github.com if you are experiencing rate limiting."
default: ${{ github.server_url == 'https://github.com' && github.token || '' }}
cache-dependency-path:
description: "Used to specify the path to dependency files. Supports wildcards or a list of file names for caching multiple dependencies."
update-environment:
description: "Set this option if you want the action to update environment variables."
default: true
allow-prereleases:
description: "When 'true', a version range passed to 'python-version' input will match prerelease versions if no GA versions are found. Only 'x.y' version range is supported for CPython."
default: false
freethreaded:
description: "When 'true', use the freethreaded version of Python."
default: false
pip-version:
description: "Used to specify the version of pip to install with the Python. Supported format: major[.minor][.patch]."
pip-install:
description: "Used to specify the packages to install with pip after setting up Python. Can be a requirements file or package names."
outputs:
python-version:
description: "The installed Python or PyPy version. Useful when given a version range as input."
cache-hit:
description: "A boolean value to indicate a cache entry was found"
python-path:
description: "The absolute path to the Python or PyPy executable."
runs:
using: 'node24'
main: 'dist/setup/index.js'
post: 'dist/cache-save/index.js'
post-if: success()
branding:
icon: 'code'
color: 'yellow'
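A minimal usage sketch showing the documented inputs and outputs; the Python version and requirements.txt path are assumptions for illustration.
name: Tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        id: setup
        with:
          python-version: '3.12'
          cache: pip
          cache-dependency-path: requirements.txt   # hypothetical dependency file
      - run: pip install -r requirements.txt
      - run: echo "Installed ${{ steps.setup.outputs.python-version }} at ${{ steps.setup.outputs.python-path }}"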
Action ID: marketplace/vsoch/license-updater
Author: Unknown
Publisher: vsoch
Repository: github.com/vsoch/license-updater
Check for updates to your license files.
| Name | Required | Description |
|---|---|---|
updaters |
Optional | Choose named updaters to run (comma separated value, no spaces) |
path |
Required | path to file or directory to check |
args |
Optional | additional args to provide to 'detect' or 'update' commands |
version |
Optional | release of updater to use |
allow_fail |
Optional | allow a failure (only relevant if pull_request is false) |
name: "license-updater"
description: "Check for updates to your license files."
branding:
icon: 'activity'
color: 'green'
inputs:
updaters:
description: Choose named updaters to run (comma separated value, no spaces)
required: false
path:
description: path to file or directory to check
required: true
args:
description: additional args to provide to 'detect' or 'update' commands
required: false
version:
description: release of updater to use
required: false
allow_fail:
description: allow a failure (only relevant if pull_request is false)
default: false
runs:
using: "composite"
steps:
- name: Install Action Updater
env:
version: ${{ inputs.version }}
run: |
if [[ "${version}" == "" ]]; then
pip install git+https://github.com/vsoch/license-updater.git@main
else
pip install license-updater@${version}
fi
shell: bash
- name: Detect Action Updates
env:
path: ${{ inputs.path }}
args: ${{ inputs.args }}
updaters: ${{ inputs.updaters }}
run: |
# If pwd is provided, ensure we get the entire path
if [[ "${path}" == "." ]]; then
path=$(pwd)
echo "path=${path}" >> ${GITHUB_ENV}
fi
cmd="license-updater"
cmd="${cmd} detect"
if [[ "${updaters}" != "" ]]; then
cmd="${cmd} --updaters ${updaters}"
fi
cmd="${cmd} ${path} ${args}"
printf "${cmd}\n"
$cmd && retval=0 || retval=1
echo "retval=${retval}" >> $GITHUB_ENV
shell: bash
- name: Exit on failure (updates)
env:
allow_fail: ${{ inputs.allow_fail }}
retval: ${{ env.retval }}
run: |
if [[ "${retval}" != "0" ]] && [[ "${allow_fail}" == "false" ]]; then
printf "Detect found changes, and allow_fail is false."
exit 1
elif [[ "${retval}" != "0" ]] && [[ "${allow_fail}" == "true" ]]; then
printf "Detect found changes, and allow_fail is true."
exit 0
fi
printf "Return value is ${retval}, no changes needed!\n"
shell: bash
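A minimal scheduled-check sketch with the documented inputs. The branch reference and the updater name are assumptions; consult the project README for the valid updater values.
name: License check
on:
  schedule:
    - cron: '0 4 * * 1'
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vsoch/license-updater@main   # branch reference assumed; pin a release in practice
        with:
          path: .
          updaters: copyright              # hypothetical updater name
          allow_fail: true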
Action ID: marketplace/actions/upload-release-asset
Author: GitHub
Publisher: actions
Repository: github.com/actions/upload-release-asset
Upload a release asset to an existing release in your repository
| Name | Required | Description |
|---|---|---|
upload_url |
Required | The URL for uploading assets to the release |
asset_path |
Required | The path to the asset you want to upload |
asset_name |
Required | The name of the asset you want to upload |
asset_content_type |
Required | The content-type of the asset you want to upload. See the supported Media Types here: https://www.iana.org/assignments/media-types/media-types.xhtml for more information |
| Name | Description |
|---|---|
browser_download_url |
The URL users can navigate to in order to download the uploaded asset |
name: 'Upload a Release Asset'
description: 'Upload a release asset to an existing release in your repository'
author: 'GitHub'
inputs:
upload_url:
description: 'The URL for uploading assets to the release'
required: true
asset_path:
description: 'The path to the asset you want to upload'
required: true
asset_name:
description: 'The name of the asset you want to upload'
required: true
asset_content_type:
description: 'The content-type of the asset you want to upload. See the supported Media Types here: https://www.iana.org/assignments/media-types/media-types.xhtml for more information'
required: true
outputs:
browser_download_url:
description: 'The URL users can navigate to in order to download the uploaded asset'
runs:
using: 'node12'
main: 'dist/index.js'
branding:
icon: 'package'
color: 'gray-dark'
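A minimal usage sketch triggered by a release event; the packaging command and asset names are assumptions for illustration.
name: Release assets
on:
  release:
    types: [created]
jobs:
  assets:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: zip -r build.zip dist/        # hypothetical packaging step
      - name: Upload asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ github.event.release.upload_url }}
          asset_path: ./build.zip
          asset_name: build.zip
          asset_content_type: application/zip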
Action ID: marketplace/azure/AppConfiguration-Sync
Author: Unknown
Publisher: azure
Repository: github.com/azure/AppConfiguration-Sync
DEPRECATED: Use Azure CLI action instead. Follow [these instructions](https://aka.ms/appconfig/githubactions) to set up.
| Name | Required | Description |
|---|---|---|
configurationFile |
Required | Path to the configuration file in the repo, relative to the repo root. Also supports glob patterns and multiple files |
format |
Required | Format of the configuration file. Valid values are: json, yaml, properties |
connectionString |
Required | Connection string for the App Configuration instance |
separator |
Required | Separator used when flattening the configuration file to key-value pairs |
strict |
Optional | Specifies whether to use a strict sync which will make the App Configuration instance exactly match the configuration file (deleting key-values not in the configuration file). Defaults to false |
prefix |
Optional | Prefix that will be added to the front of the keys |
label |
Optional | Label to use when setting the key-value pairs. If not specified, a null label will be used |
depth |
Optional | Max depth (positive number) for flattening the configuration file |
tags |
Optional | Stringified form of a JSON object with the following shape: { [propertyName: string]: string; } |
contentType |
Optional | Content type associated with the values |
name: 'Azure App Configuration Sync (Deprecated)'
description: 'DEPRECATED: Use Azure CLI action instead. Follow [these instructions](https://aka.ms/appconfig/githubactions) to set up.'
inputs:
configurationFile:
description: 'Path to the configuration file in the repo, relative to the repo root. Also supports glob patterns and multiple files'
required: true
format:
description: 'Format of the configuration file. Valid values are: json, yaml, properties'
required: true
connectionString:
description: 'Connection string for the App Configuration instance'
required: true
separator:
description: 'Separator used when flattening the configuration file to key-value pairs'
required: true
strict:
description: 'Specifies whether to use a strict sync which will make the App Configuration instance exactly match the configuration file (deleting key-values not in the configuration file). Defaults to false'
required: false
prefix:
description: 'Prefix that will be added to the front of the keys'
required: false
label:
description: 'Label to use when setting the key-value pairs. If not specified, a null label will be used'
required: false
depth:
description: 'Max depth (positive number) for flattening the configuration file'
required: false
tags:
description: 'Stringified form of a JSON object with the following shape: { [propertyName: string]: string; }'
required: false
contentType:
description: 'Content type associated with the values'
required: false
runs:
using: 'node20'
main: 'lib/index.js'
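The action above is deprecated in favor of the Azure CLI approach it links to, but for reference a minimal invocation with the documented inputs would look like the sketch below; the version tag and file path are assumptions.
name: Sync App Configuration
on: [push]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/AppConfiguration-Sync@v1   # version tag assumed; prefer the Azure CLI setup linked above
        with:
          configurationFile: config/appsettings.json   # hypothetical path
          format: json
          connectionString: ${{ secrets.APP_CONFIG_CONNECTION_STRING }}
          separator: ':'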
Action ID: marketplace/mheap/github-action-pull-request-milestone-toolkit
Author: Unknown
Publisher: mheap
Repository: github.com/mheap/github-action-pull-request-milestone-toolkit
Congratulate people when they hit a certain number of merged pull requests
name: Pull Request Milestone
description: Congratulate people when they hit a certain number of merged pull requests
runs:
using: docker
image: Dockerfile
branding:
icon: star
color: yellow
Action ID: marketplace/yeslayla/build-godot-action
Author: yeslayla
Publisher: yeslayla
Repository: github.com/yeslayla/build-godot-action
Build a Godot project for multiple platforms
| Name | Required | Description |
|---|---|---|
name |
Required | Name of the exported binary |
preset |
Required | Name of the preset in `export_presets.cfg` to use |
subdirectory |
Optional | Optional name of the subdirectory to put exported project in |
package |
Optional | Set true to output an artifact zip file |
projectDir |
Optional | Location of Godot project in repository Default: . |
debugMode |
Optional | Whether or not to use `--export-debug` |
name: "Build Godot"
description: "Build a Godot project for multiple platforms"
author: yeslayla
inputs:
name:
description: 'Name of the exported binary'
required: true
preset:
description: 'Name of the preset in `export_presets.cfg` to use'
required: true
subdirectory:
description: 'Optional name of the subdirectory to put exported project in'
default: ""
package:
description: 'Set true to output an artifact zip file'
default: false
projectDir:
description: 'Location of Godot project in repository'
default: "."
debugMode:
description: 'Whether or not to use `--export-debug`'
default: false
runs:
using: docker
image: Dockerfile
args:
- ${{ inputs.name }}
- ${{ inputs.preset }}
- ${{ inputs.subdirectory }}
- ${{ inputs.package }}
- ${{ inputs.projectDir }}
- ${{ inputs.debugMode }}
branding:
icon: loader
color: blue
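A minimal export sketch with the documented inputs. The version tag, binary name, and preset name are assumptions; the preset must match an entry in the project's export_presets.cfg.
name: Build Godot project
on: [push]
jobs:
  export:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: yeslayla/build-godot-action@v1   # version tag assumed
        with:
          name: my-game.x86_64        # hypothetical binary name
          preset: "Linux/X11"         # hypothetical preset; must exist in export_presets.cfg
          package: true
          projectDir: .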