Compare commits


22 Commits
19.8 ... 19

Author SHA1 Message Date
George Joseph
7b5fff3b20 .github: Minor tweak to Asterisk Releaser 2023-10-09 08:55:18 -06:00
George Joseph
334f4b01bb .github: Fix cherry-pick reminder issues 2023-10-09 08:53:56 -06:00
George Joseph
b9fdfaf0cb .github: Update workflow-application-token-action to v2 2023-10-09 08:52:25 -06:00
George Joseph
2ce533ca84 .github: Fix job prereqs in PROpenedUpdated 2023-10-09 08:48:05 -06:00
George Joseph
ece20bf69b .github: Block PR tests until approved 2023-10-09 08:47:51 -06:00
George Joseph
a8b01ed8ec .github: Update AsteriskReleaser for security releases 2023-10-09 08:46:53 -06:00
George Joseph
1379e048da .github: Minor tweak to Asterisk Releaser 2023-10-09 08:38:25 -06:00
George Joseph
c10c1ca4e2 ari-stubs: Fix more local anchor references
Also allow CreateDocs job to be run manually with default branches.
2023-09-05 13:36:14 -06:00
George Joseph
d6e764c496 ari-stubs: Fix more local anchor references
Also allow CreateDocs job to be run manually with default branches.
2023-09-05 13:05:44 -06:00
George Joseph
64d67349b9 ari-stubs: Fix broken documentation anchors
All of the links that reference page anchors with capital letters in
the ids (#Something) have been changed to lower case to match the
anchors that are generated by mkdocs.
2023-09-05 09:55:46 -06:00
George Joseph
e292c66b1a alembic: Fix quoting of the 100rel column
Add quoting around the ps_endpoints 100rel column in the ALTER
statements.  Although alembic doesn't complain when generating
sql statements, postgresql does (rightly so).

Resolves: #274
2023-08-29 11:10:02 +00:00
George Joseph
fb1eee2fef .github: Use generic releaser 2023-08-15 13:13:08 -06:00
George Joseph
cf116ea187 .github: Suppress cherry-pick reminder for some situations
In PROpenedOrUpdated, the cherry-pick reminder will now be
suppressed if there are already valid 'cherry-pick-to' comments
in the PR or the PR contained a 'cherry-pick-to: none' comment.
2023-07-11 06:50:36 -06:00
Sean Bright
70c551e3bb apply_patches: Use globbing instead of file/sort.
This accomplishes the same thing as a `find ... | sort` but with the
added benefit of clarity and avoiding a call to a subshell.

Additionally drop the -s option from call to patch as it is not POSIX.
2023-07-07 15:11:53 +00:00
George Joseph
fcbeaba5ea bundled_pjproject: Backport 2 SSL patches from upstream
* Fix double free of ossock->ossl_ctx in case of errors
https://github.com/pjsip/pjproject/commit/863629bc65d6

* free SSL context and reset context pointer when setting the cipher
  list fails
https://github.com/pjsip/pjproject/commit/0fb32cd4c0b2

Resolves: #194
2023-07-06 18:28:38 +00:00
George Joseph
898014ab7f bundled_pjproject: Backport security fixes from pjproject 2.13.1
Merge-pull-request-from-GHSA-9pfh-r8x4-w26w.patch
Merge-pull-request-from-GHSA-cxwq-5g9x-x7fr.patch
Locking-fix-so-that-SSL_shutdown-and-SSL_write-are-n.patch
Don-t-call-SSL_shutdown-when-receiving-SSL_ERROR_SYS.patch

Resolves: #188
2023-07-06 15:21:39 +00:00
George Joseph
f45fd46190 test_stasis_endpoints: Fix channel_messages test again 2023-07-06 09:09:52 -06:00
George Joseph
a5c4f3e567 test_stasis_endpoints.c: Make channel_messages more stable
The channel_messages test was assuming that stasis would return
messages in a specific order.  This is an incorrect assumption as
message ordering was never guaranteed.  This was causing the test
to fail occasionally.  We now test all the messages for the
required message types instead of testing one by one.

Resolves: #158
2023-07-06 09:09:41 -06:00
George Joseph
fcaa1ba181 apply_patches: Sort patch list before applying
The apply_patches script wasn't sorting the list of patches in
the "patches" directory before applying them. This left the list
in an indeterminate order. In most cases, the list is actually
sorted but rarely, they can be out of order and cause dependent
patches to fail to apply.

We now sort the list, but the "sort" program wasn't in the
configure scripts, so we needed to add it and regenerate
the scripts as well.

Resolves: #193
2023-07-06 14:04:06 +00:00
George Joseph
29570120f2 .github: Add workflow to this branch 2023-07-05 08:04:44 -06:00
George Joseph
1d6de5d77b rest-api: Updates for new documentation site
The new documentation site uses traditional markdown instead
of the Confluence flavored version.  This required changes in
the mustache templates and the python that generates the files.
2023-06-27 08:35:35 -06:00
George Joseph
4a250c8834 res_pjsip_transport_websocket: Add remote port to transport
When Asterisk receives a new websocket connection, it creates a new
pjsip transport for it and copies connection data into it.  The
transport manager then uses the remote IP address and port on the
transport to create a monitor for each connection.  However, the
remote port wasn't being copied, only the IP address which meant
that the transport manager was creating only 1 monitoring entry for
all websocket connections from the same IP address. Therefore, if
one of those connections failed, it deleted the transport taking
all the connections from that same IP address with it.

* We now copy the remote port into the created transport and the
  transport manager behaves correctly.

ASTERISK-30369

Change-Id: Ib506d40897ea6286455ac0be4dfbb0ed43b727e1
2023-01-03 06:49:19 -06:00
32 changed files with 11508 additions and 11832 deletions

87
.github/ISSUE_TEMPLATE/bug-report.yml vendored Normal file
View File

@@ -0,0 +1,87 @@
name: Bug
description: File a bug report
title: "[bug]: "
labels: ["bug", "triage"]
#assignees:
#  - octocat
body:
  - type: markdown
    attributes:
      value: |
        Thanks for creating a report! The issue has entered the triage process. That means the issue will wait in this status until a Bug Marshal has an opportunity to review the issue. Once the issue has been reviewed you will receive comments regarding the next steps towards resolution. Please note that log messages and other files should not be sent to the Sangoma Asterisk Team unless explicitly asked for. All files should be placed on this issue in a sanitized fashion as needed.
        A good first step is for you to review the Asterisk Issue Guidelines if you haven't already. The guidelines detail what is expected from an Asterisk issue report.
        Then, if you are submitting a patch, please review the Patch Contribution Process.
        Please note that once your issue enters an open state it has been accepted. As Asterisk is an open source project there is no guarantee or timeframe on when your issue will be looked into. If you need expedient resolution you will need to find and pay a suitable developer. Asking for an update on your issue will not yield any progress on it and will not result in a response. All updates are posted to the issue when they occur.
        Please note that by submitting data, code, or documentation to Sangoma through GitHub, you accept the Terms of Use present at
        https://www.asterisk.org/terms-of-use/.
        Thanks for taking the time to fill out this bug report!
  - type: dropdown
    id: severity
    attributes:
      label: Severity
      options:
        - Trivial
        - Minor
        - Major
        - Critical
        - Blocker
    validations:
      required: true
  - type: input
    id: versions
    attributes:
      label: Versions
      description: Enter one or more versions separated by commas.
    validations:
      required: true
  - type: input
    id: components
    attributes:
      label: Components/Modules
      description: Enter one or more components or modules separated by commas.
    validations:
      required: true
  - type: textarea
    id: environment
    attributes:
      label: Operating Environment
      description: OS, Distribution, Version, etc.
    validations:
      required: true
  - type: dropdown
    id: frequency
    attributes:
      label: Frequency of Occurrence
      options:
        - "Never"
        - "One Time"
        - "Occasional"
        - "Frequent"
        - "Constant"
  - type: textarea
    id: description
    attributes:
      label: Issue Description
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell
  - type: markdown
    attributes:
      value: |
        [Asterisk Issue Guidelines](https://wiki.asterisk.org/wiki/display/AST/Asterisk+Issue+Guidelines)
  - type: checkboxes
    id: guidelines
    attributes:
      label: Asterisk Issue Guidelines
      options:
        - label: Yes, I have read the Asterisk Issue Guidelines
          required: true

8
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
  - name: Asterisk Community Support
    url: https://community.asterisk.org
    about: Please ask and answer questions here.
  - name: Feature Requests
    url: https://github.com/asterisk/asterisk-feature-requests/issues
    about: Please submit feature requests here.

27
.github/ISSUE_TEMPLATE/improvement.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: Improvement
description: Submit an improvement to existing functionality
title: "[improvement]: "
labels: ["improvement", "triage"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for creating a report! The issue has entered the triage process. That means the issue will wait in this status until a Bug Marshal has an opportunity to review the issue. Once the issue has been reviewed you will receive comments regarding the next steps towards resolution. Please note that log messages and other files should not be sent to the Sangoma Asterisk Team unless explicitly asked for. All files should be placed on this issue in a sanitized fashion as needed.
        A good first step is for you to review the Asterisk Issue Guidelines if you haven't already. The guidelines detail what is expected from an Asterisk issue report.
        Then, if you are submitting a patch, please review the Patch Contribution Process.
        Please note that once your issue enters an open state it has been accepted. As Asterisk is an open source project there is no guarantee or timeframe on when your issue will be looked into. If you need expedient resolution you will need to find and pay a suitable developer. Asking for an update on your issue will not yield any progress on it and will not result in a response. All updates are posted to the issue when they occur.
        Please note that by submitting data, code, or documentation to Sangoma through GitHub, you accept the Terms of Use present at
        https://www.asterisk.org/terms-of-use/.
        Thanks for taking the time to fill out this report!
  - type: textarea
    id: description
    attributes:
      label: Improvement Description
      description: Describe the improvement in as much detail as possible
    validations:
      required: true

27
.github/ISSUE_TEMPLATE/new-feature.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: New Feature Submission
description: Submit a New Feature
title: "[new-feature]: "
labels: ["new-feature", "triage"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for creating a report! The issue has entered the triage process. That means the issue will wait in this status until a Bug Marshal has an opportunity to review the issue. Once the issue has been reviewed you will receive comments regarding the next steps towards resolution. Please note that log messages and other files should not be sent to the Sangoma Asterisk Team unless explicitly asked for. All files should be placed on this issue in a sanitized fashion as needed.
        A good first step is for you to review the Asterisk Issue Guidelines if you haven't already. The guidelines detail what is expected from an Asterisk issue report.
        Then, if you are submitting a patch, please review the Patch Contribution Process.
        Please note that once your issue enters an open state it has been accepted. As Asterisk is an open source project there is no guarantee or timeframe on when your issue will be looked into. If you need expedient resolution you will need to find and pay a suitable developer. Asking for an update on your issue will not yield any progress on it and will not result in a response. All updates are posted to the issue when they occur.
        Please note that by submitting data, code, or documentation to Sangoma through GitHub, you accept the Terms of Use present at
        https://www.asterisk.org/terms-of-use/.
        Thanks for taking the time to fill out this report!
  - type: textarea
    id: description
    attributes:
      label: Feature Description
      description: Describe the new feature in as much detail as possible
    validations:
      required: true

167
.github/workflows/CherryPickTest.yml vendored Normal file
View File

@@ -0,0 +1,167 @@
name: CherryPickTest
run-name: "Cherry-Pick Tests for PR ${{github.event.number}}"
on:
  pull_request_target:
    types: [ labeled ]
concurrency:
  group: ${{github.workflow}}-${{github.event.number}}
  cancel-in-progress: true
env:
  PR_NUMBER: ${{ github.event.number }}
  MODULES_BLACKLIST: ${{ vars.GATETEST_MODULES_BLACKLIST }} ${{ vars.UNITTEST_MODULES_BLACKLIST }}
jobs:
  IdentifyBranches:
    name: IdentifyBranches
    if: ${{ github.event.label.name == vars.CHERRY_PICK_TEST_LABEL }}
    outputs:
      branches: ${{ steps.getbranches.outputs.branches }}
      branch_count: ${{ steps.getbranches.outputs.branch_count }}
    runs-on: ubuntu-latest
    steps:
      - name: Remove Trigger Label, Add InProgress Label
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --remove-label ${{vars.CHERRY_PICK_TEST_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_CHECKS_PASSED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_CHECKS_FAILED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_GATES_PASSED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_GATES_FAILED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
      - name: Get cherry-pick branches
        uses: asterisk/asterisk-ci-actions/GetCherryPickBranchesFromPR@main
        id: getbranches
        with:
          repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          cherry_pick_regex: ${{vars.CHERRY_PICK_REGEX}}
          github_token: ${{secrets.GITHUB_TOKEN}}
      - name: Check Branch Count
        if: ${{ steps.getbranches.outputs.branch_count > 0 }}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --add-label ${{vars.CHERRY_PICK_TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
  CherryPickUnitTestMatrix:
    needs: [ IdentifyBranches ]
    if: ${{ needs.IdentifyBranches.outputs.branch_count > 0 && ( success() || failure() ) }}
    continue-on-error: false
    strategy:
      fail-fast: false
      matrix:
        branch: ${{ fromJSON(needs.IdentifyBranches.outputs.branches) }}
    runs-on: ubuntu-latest
    steps:
      - name: Run Unit Tests for branch ${{matrix.branch}}
        uses: asterisk/asterisk-ci-actions/AsteriskUnitComposite@main
        with:
          asterisk_repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          base_branch: ${{matrix.branch}}
          is_cherry_pick: true
          modules_blacklist: ${{env.MODULES_BLACKLIST}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          unittest_command: ${{vars.UNITTEST_COMMAND}}
  CherryPickUnitTests:
    needs: [ IdentifyBranches, CherryPickUnitTestMatrix ]
    if: ${{ needs.IdentifyBranches.outputs.branch_count > 0 && ( success() || failure() ) }}
    runs-on: ubuntu-latest
    steps:
      - name: Check unit test matrix status
        env:
          RESULT: ${{needs.CherryPickUnitTestMatrix.result}}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          case $RESULT in
            success)
              gh pr edit --repo ${{github.repository}} \
                --add-label ${{vars.CHERRY_PICK_CHECKS_PASSED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::notice::All tests passed"
              exit 0
              ;;
            skipped)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.CHERRY_PICK_TESTING_IN_PROGRESS}} \
                --add-label ${{vars.CHERRY_PICK_CHECKS_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::notice::Unit tests were skipped because of an earlier failure"
              exit 1
              ;;
            *)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.CHERRY_PICK_TESTING_IN_PROGRESS}} \
                --add-label ${{vars.CHERRY_PICK_CHECKS_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::error::One or more tests failed ($RESULT)"
              exit 1
          esac
  CherryPickGateTestMatrix:
    needs: [ IdentifyBranches, CherryPickUnitTests ]
    if: ${{ success() }}
    continue-on-error: false
    strategy:
      fail-fast: false
      matrix:
        branch: ${{ fromJSON(needs.IdentifyBranches.outputs.branches) }}
        group: ${{ fromJSON(vars.GATETEST_LIST) }}
    runs-on: ubuntu-latest
    steps:
      - name: Run Gate Tests for ${{ matrix.group }}-${{matrix.branch}}
        uses: asterisk/asterisk-ci-actions/AsteriskGateComposite@main
        with:
          test_type: Gate
          asterisk_repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          base_branch: ${{matrix.branch}}
          is_cherry_pick: true
          modules_blacklist: ${{env.MODULES_BLACKLIST}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          testsuite_repo: ${{vars.TESTSUITE_REPO}}
          gatetest_group: ${{matrix.group}}
          gatetest_commands: ${{vars.GATETEST_COMMANDS}}
  CherryPickGateTests:
    needs: [ IdentifyBranches, CherryPickGateTestMatrix ]
    if: ${{ success() || failure() }}
    runs-on: ubuntu-latest
    steps:
      - name: Check test matrix status
        env:
          RESULT: ${{needs.CherryPickGateTestMatrix.result}}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --remove-label ${{vars.CHERRY_PICK_TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
          case $RESULT in
            success)
              gh pr edit --repo ${{github.repository}} \
                --add-label ${{vars.CHERRY_PICK_GATES_PASSED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::notice::All Testsuite tests passed"
              exit 0
              ;;
            skipped)
              echo "::error::Testsuite tests were skipped because of an earlier failure"
              exit 1
              ;;
            *)
              gh pr edit --repo ${{github.repository}} \
                --add-label ${{vars.CHERRY_PICK_GATES_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::error::One or more Testsuite tests failed ($RESULT)"
              exit 1
          esac

123
.github/workflows/CreateDocs.yml vendored Normal file
View File

@@ -0,0 +1,123 @@
name: CreateDocs
on:
  workflow_dispatch:
    inputs:
      branches:
        description: "JSON array of branches: ['18','20'] (no spaces)"
        required: false
        type: string
  schedule:
    # Times are UTC
    - cron: '0 04 * * *'
env:
  ASTERISK_REPO: ${{ github.repository }}
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  DEFAULT_BRANCHES: ${{ vars.WIKIDOC_BRANCHES }}
  INPUT_BRANCHES: ${{ inputs.branches }}
jobs:
  CreateDocsDebug:
    runs-on: ubuntu-latest
    outputs:
      manual_branches: ${{ steps.setup.outputs.manual_branches }}
    steps:
      - name: setup
        id: setup
        run: |
          MANUAL_BRANCHES="$INPUT_BRANCHES"
          [ -z "$MANUAL_BRANCHES" ] && MANUAL_BRANCHES="$DEFAULT_BRANCHES" || :
          echo "manual_branches=${MANUAL_BRANCHES}"
          echo "manual_branches=${MANUAL_BRANCHES}" >>${GITHUB_OUTPUT}
          exit 0
      - name: DumpEnvironment
        uses: asterisk/asterisk-ci-actions/DumpEnvironmentAction@main
        with:
          action-inputs: ${{toJSON(inputs)}}
          action-vars: ${{ toJSON(steps.setup.outputs) }}
  CreateDocsScheduledMatrix:
    needs: [ CreateDocsDebug ]
    if: ${{github.event_name == 'schedule' && fromJSON(vars.WIKIDOCS_ENABLE) == true }}
    continue-on-error: false
    strategy:
      fail-fast: false
      matrix:
        branch: ${{ fromJSON(vars.WIKIDOC_BRANCHES) }}
    runs-on: ubuntu-latest
    steps:
      - name: CreateDocs for ${{matrix.branch}}
        uses: asterisk/asterisk-ci-actions/CreateAsteriskDocsComposite@main
        with:
          asterisk_repo: ${{env.ASTERISK_REPO}}
          base_branch: ${{matrix.branch}}
          docs_dir: docs_dir/${{matrix.branch}}
          github_token: ${{secrets.GITHUB_TOKEN}}
  CreateDocsScheduled:
    needs: [ CreateDocsScheduledMatrix ]
    if: ${{ success() || failure() }}
    runs-on: ubuntu-latest
    steps:
      - name: Check CreateDocsScheduledMatrix status
        env:
          RESULT: ${{needs.CreateDocsScheduledMatrix.result}}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          case $RESULT in
            success)
              echo "::notice::Docs created"
              exit 0
              ;;
            skipped)
              echo "::notice::Skipped"
              exit 1
              ;;
            *)
              echo "::error::One or more CreateDocs runs failed ($RESULT)"
              exit 1
          esac
  CreateDocsManualMatrix:
    needs: [ CreateDocsDebug ]
    if: ${{github.event_name == 'workflow_dispatch'}}
    continue-on-error: false
    strategy:
      fail-fast: false
      matrix:
        branch: ${{ fromJSON(vars.WIKIDOC_MANUAL_BRANCHES) }}
    runs-on: ubuntu-latest
    steps:
      - name: CreateDocs for ${{matrix.branch}}
        uses: asterisk/asterisk-ci-actions/CreateAsteriskDocsComposite@main
        with:
          asterisk_repo: ${{env.ASTERISK_REPO}}
          base_branch: ${{matrix.branch}}
          docs_dir: docs_dir/${{matrix.branch}}
          github_token: ${{secrets.GITHUB_TOKEN}}
  CreateDocsManual:
    needs: [ CreateDocsManualMatrix ]
    if: ${{ success() || failure() }}
    runs-on: ubuntu-latest
    steps:
      - name: Check CreateDocsManualMatrix status
        env:
          RESULT: ${{needs.CreateDocsManualMatrix.result}}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          case $RESULT in
            success)
              echo "::notice::Docs created"
              exit 0
              ;;
            skipped)
              echo "::notice::Skipped"
              exit 1
              ;;
            *)
              echo "::error::One or more CreateDocs runs failed ($RESULT)"
              exit 1
          esac

15
.github/workflows/IssueOpened.yml vendored Normal file
View File

@@ -0,0 +1,15 @@
name: Issue Opened
run-name: "Issue ${{github.event.number}} ${{github.event.action}} by ${{github.actor}}"
on:
  issues:
    types: opened
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - name: initial labeling
        uses: andymckay/labeler@master
        with:
          add-labels: "triage"
          ignore-if-labeled: true

190
.github/workflows/MergeApproved.yml vendored Normal file
View File

@@ -0,0 +1,190 @@
name: MergeApproved
run-name: "Merge Approved for PR ${{github.event.number}}"
on:
  pull_request_target:
    types: [labeled]
env:
  PR_NUMBER: ${{ github.event.number }}
  BASE_BRANCH: ${{github.event.pull_request.base.ref}}
  MODULES_BLACKLIST: ${{ vars.GATETEST_MODULES_BLACKLIST }} ${{ vars.UNITTEST_MODULES_BLACKLIST }}
  FORCE: ${{ endsWith(github.event.label.name, '-force') }}
jobs:
  IdentifyBranches:
    if: contains(fromJSON(vars.MERGE_APPROVED_LABELS), github.event.label.name)
    outputs:
      branches: ${{ steps.getbranches.outputs.branches }}
      all_branches: ${{ steps.checkbranches.outputs.all_branches }}
      branch_count: ${{ steps.getbranches.outputs.branch_count }}
    runs-on: ubuntu-latest
    steps:
      - name: Clean up labels
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --remove-label ${{github.event.label.name}} \
            --remove-label ${{vars.PRE_MERGE_CHECKS_PASSED_LABEL}} \
            --remove-label ${{vars.PRE_MERGE_CHECKS_FAILED_LABEL}} \
            --remove-label ${{vars.PRE_MERGE_GATES_PASSED_LABEL}} \
            --remove-label ${{vars.PRE_MERGE_GATES_FAILED_LABEL}} \
            --remove-label ${{vars.PRE_MERGE_TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
      - name: Get cherry-pick branches
        uses: asterisk/asterisk-ci-actions/GetCherryPickBranchesFromPR@main
        id: getbranches
        with:
          repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          cherry_pick_regex: ${{vars.CHERRY_PICK_REGEX}}
          github_token: ${{secrets.GITHUB_TOKEN}}
      - name: Check Branch Count
        id: checkbranches
        env:
          BRANCH_COUNT: ${{ steps.getbranches.outputs.branch_count }}
          BRANCHES: ${{ steps.getbranches.outputs.branches }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --add-label ${{vars.PRE_MERGE_TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
          all_branches=$(echo "$BRANCHES" | jq -c "[ \"$BASE_BRANCH\" ] + .")
          echo "all_branches=${all_branches}" >>${GITHUB_OUTPUT}
      - name: Pre Check Cherry-Picks
        if: ${{ steps.getbranches.outputs.branch_count > 0 }}
        uses: asterisk/asterisk-ci-actions/CherryPick@main
        with:
          repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          branches: ${{steps.getbranches.outputs.branches}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          push: false
  PreMergeUnitTestMatrix:
    needs: [ IdentifyBranches ]
    if: success()
    continue-on-error: false
    strategy:
      fail-fast: false
      matrix:
        branch: ${{ fromJSON(needs.IdentifyBranches.outputs.all_branches) }}
    runs-on: ubuntu-latest
    steps:
      - name: Run Unit Tests for branch ${{matrix.branch}}
        uses: asterisk/asterisk-ci-actions/AsteriskUnitComposite@main
        with:
          asterisk_repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          base_branch: ${{matrix.branch}}
          is_cherry_pick: true
          modules_blacklist: ${{env.MODULES_BLACKLIST}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          unittest_command: ${{vars.UNITTEST_COMMAND}}
  PreMergeUnitTests:
    needs: [ IdentifyBranches, PreMergeUnitTestMatrix ]
    runs-on: ubuntu-latest
    steps:
      - name: Check unit test matrix status
        env:
          RESULT: ${{needs.PreMergeUnitTestMatrix.result}}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          case $RESULT in
            success)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.PRE_MERGE_TESTING_IN_PROGRESS}} \
                --add-label ${{vars.PRE_MERGE_CHECKS_PASSED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::notice::All tests passed"
              exit 0
              ;;
            skipped)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.PRE_MERGE_TESTING_IN_PROGRESS}} \
                --add-label ${{vars.PRE_MERGE_CHECKS_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::notice::Unit tests were skipped because of an earlier failure"
              exit 1
              ;;
            *)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.PRE_MERGE_TESTING_IN_PROGRESS}} \
                --add-label ${{vars.PRE_MERGE_CHECKS_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::error::One or more tests failed ($RESULT)"
              exit 1
          esac
  MergeAndCherryPick:
    needs: [ IdentifyBranches, PreMergeUnitTests ]
    if: success()
    concurrency:
      group: MergeAndCherryPick
      cancel-in-progress: false
    runs-on: ubuntu-latest
    steps:
      - name: Start Merge
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --add-label ${{vars.MERGE_IN_PROGRESS_LABEL}} \
            ${{env.PR_NUMBER}} || :
      - name: Get Token needed to push cherry-picks
        id: get_workflow_token
        uses: peter-murray/workflow-application-token-action@v2
        with:
          application_id: ${{secrets.ASTERISK_ORG_ACCESS_APP_ID}}
          application_private_key: ${{secrets.ASTERISK_ORG_ACCESS_APP_PRIV_KEY}}
          organization: asterisk
      - name: Merge and Cherry Pick to ${{needs.IdentifyBranches.outputs.branches}}
        id: mergecp
        uses: asterisk/asterisk-ci-actions/MergeAndCherryPickComposite@main
        with:
          repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          branches: ${{needs.IdentifyBranches.outputs.branches}}
          force: ${{env.FORCE}}
          github_token: ${{steps.get_workflow_token.outputs.token}}
      - name: Merge Cleanup
        if: always()
        env:
          RESULT: ${{ steps.mergecp.outcome }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BRANCH_COUNT: ${{ needs.IdentifyBranches.outputs.branch_count }}
          BRANCHES: ${{ needs.IdentifyBranches.outputs.branches }}
        run: |
          case $RESULT in
            success)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.MERGE_IN_PROGRESS_LABEL}} \
                ${{env.PR_NUMBER}} || :
              if [ $BRANCH_COUNT -eq 0 ] ; then
                gh pr comment --repo ${{github.repository}} \
                  -b "Successfully merged to branch $BASE_BRANCH." \
                  ${{env.PR_NUMBER}} || :
              else
                gh pr comment --repo ${{github.repository}} \
                  -b "Successfully merged to branch $BASE_BRANCH and cherry-picked to $BRANCHES" \
                  ${{env.PR_NUMBER}} || :
              fi
              exit 0
              ;;
            failure)
              gh pr edit --repo ${{github.repository}} \
                --remove-label ${{vars.MERGE_IN_PROGRESS_LABEL}} \
                --add-label ${{vars.MERGE_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              exit 1
              ;;
            *)
          esac

28
.github/workflows/NightlyAdmin.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: Nightly Admin
on:
  schedule:
    - cron: '30 1 * * *'
env:
  ASTERISK_REPO: ${{ github.repository }}
  PR_NUMBER: 0
  PR_COMMIT: ''
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  MODULES_BLACKLIST: ${{ vars.GATETEST_MODULES_BLACKLIST }} ${{ vars.UNITTEST_MODULES_BLACKLIST }}
jobs:
  CloseStaleIssues:
    runs-on: ubuntu-latest
    steps:
      - name: Close Stale Issues
        uses: actions/stale@v7
        with:
          stale-issue-message: 'This issue is stale because it has been open 7 days with no activity. Remove stale label or comment or this will be closed in 14 days.'
          stale-issue-label: stale
          close-issue-message: 'This issue was closed because it has been stalled for 14 days with no activity.'
          days-before-stale: 7
          days-before-close: 14
          days-before-pr-close: -1
          only-labels: triage,feedback-required

59
.github/workflows/NightlyTests.yml vendored Normal file
View File

@@ -0,0 +1,59 @@
name: NightlyTests
on:
  workflow_dispatch:
  schedule:
    - cron: '0 2 * * *'
env:
  ASTERISK_REPO: ${{ github.repository }}
  PR_NUMBER: 0
  PR_COMMIT: ''
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  MODULES_BLACKLIST: ${{ vars.GATETEST_MODULES_BLACKLIST }}
jobs:
  AsteriskNightly:
    strategy:
      fail-fast: false
      matrix:
        branch: ${{ fromJSON(vars.NIGHTLYTEST_BRANCHES) }}
        group: ${{ fromJSON(vars.NIGHTLYTEST_LIST) }}
    runs-on: ubuntu-latest
    steps:
      - name: Run Nightly Tests for ${{ matrix.group }}/${{ matrix.branch }}
        uses: asterisk/asterisk-ci-actions/AsteriskGateComposite@main
        with:
          test_type: Nightly
          asterisk_repo: ${{env.ASTERISK_REPO}}
          pr_number: ${{env.PR_NUMBER}}
          base_branch: ${{matrix.branch}}
          modules_blacklist: ${{env.MODULES_BLACKLIST}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          testsuite_repo: ${{vars.TESTSUITE_REPO}}
          gatetest_group: ${{matrix.group}}
          gatetest_commands: ${{vars.GATETEST_COMMANDS}}
  AsteriskNightlyTests:
    if: ${{ always() }}
    runs-on: ubuntu-latest
    needs: AsteriskNightly
    steps:
      - name: Check test matrix status
        env:
          RESULT: ${{needs.AsteriskNightly.result}}
        run: |
          case $RESULT in
            success)
              echo "::notice::All Testsuite tests passed"
              exit 0
              ;;
            skipped)
              echo "::error::Testsuite tests were skipped because of an earlier failure"
              exit 1
              ;;
            *)
              echo "::error::One or more Testsuite tests failed"
              exit 1
          esac

32
.github/workflows/PRMerged.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: PRMerged
run-name: "PR ${{github.event.number || inputs.pr_number}} ${{github.event.action || 'MANUAL POST MERGE'}} by ${{ github.actor }}"
on:
  pull_request_target:
    types: [closed]
  workflow_dispatch:
    inputs:
      pr_number:
        description: 'PR number'
        required: true
        type: number
concurrency:
  group: ${{github.workflow}}-${{github.event.number || inputs.pr_number}}
  cancel-in-progress: true
env:
  REPO: ${{github.repository}}
  PR_NUMBER: ${{github.event.number || inputs.pr_number}}
  GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
jobs:
  CloseIssues:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: wow-actions/auto-close-fixed-issues@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

199
.github/workflows/PROpenedOrUpdated.yml vendored Normal file
View File

@@ -0,0 +1,199 @@
name: PROpenedOrUpdated
run-name: "PR ${{github.event.number}} ${{github.event.action}} by ${{ github.actor }}"
on:
  # workflow_dispatch:
  pull_request_target:
    # types: [opened, reopened, synchronize]
    types: [labeled]
env:
  ASTERISK_REPO: ${{github.repository}}
  PR_NUMBER: ${{github.event.number}}
  PR_COMMIT: ${{github.event.pull_request.head.sha}}
  BRANCH: ${{github.event.pull_request.base.ref}}
  GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
  MODULES_BLACKLIST: ${{vars.GATETEST_MODULES_BLACKLIST}} ${{vars.UNITTEST_MODULES_BLACKLIST}}
jobs:
  PRTestSetup:
    if: ${{ github.event.label.name == vars.PR_ACCEPTANCE_TEST_LABEL }}
    runs-on: ubuntu-latest
    steps:
      - name: Job Start Delay
        env:
          JOB_START_DELAY_SEC: ${{vars.PR_JOB_START_DELAY_SEC}}
        run: |
          # Give the user a chance to add their "cherry-pick-to" comments
          sleep ${JOB_START_DELAY_SEC:-60}
      - name: Get Token needed to add reviewers
        if: github.event.action == 'opened'
        id: get_workflow_token
        uses: peter-murray/workflow-application-token-action@v2
        with:
          application_id: ${{secrets.ASTERISK_ORG_ACCESS_APP_ID}}
          application_private_key: ${{secrets.ASTERISK_ORG_ACCESS_APP_PRIV_KEY}}
          organization: asterisk
      - name: Get cherry-pick branches
        uses: asterisk/asterisk-ci-actions/GetCherryPickBranchesFromPR@main
        id: getbranches
        with:
          repo: ${{github.repository}}
          pr_number: ${{env.PR_NUMBER}}
          cherry_pick_regex: ${{vars.CHERRY_PICK_REGEX}}
          github_token: ${{secrets.GITHUB_TOKEN}}
      - name: Add cherry-pick reminder
        env:
          GITHUB_TOKEN: ${{steps.get_workflow_token.outputs.token}}
          GH_TOKEN: ${{steps.get_workflow_token.outputs.token}}
          CHERRY_PICK_REMINDER: ${{vars.CHERRY_PICK_REMINDER}}
          BRANCHES_OUTPUT: ${{toJSON(steps.getbranches.outputs)}}
          BRANCH_COUNT: ${{steps.getbranches.outputs.branch_count}}
          FORCED_NONE: ${{steps.getbranches.outputs.forced_none}}
        run: |
          # If the user already added "cherry-pick-to" comments
          # we don't need to remind them.
          ( $FORCED_NONE || [ $BRANCH_COUNT -gt 0 ] ) && { echo "No reminder needed." ; exit 0 ; }
          IFS=$'; \n'
          # If there's already a reminder comment, don't add another one.
          ADD_COMMENT=true
          # This query will FAIL if it finds the comment.
          gh pr view --repo ${{github.repository}} --json comments \
            --jq '.comments[].body | select(. | startswith("<!--CPR-->")) | halt_error(1)' \
            ${{env.PR_NUMBER}} >/dev/null 2>&1 || ADD_COMMENT=false
          if $ADD_COMMENT ; then
            echo "Adding CPR comment"
            gh pr comment --repo ${{github.repository}} \
              -b "${CHERRY_PICK_REMINDER}" ${{env.PR_NUMBER}}
          else
            echo "CPR comment already present"
          fi
      - name: Add reviewers
        if: github.event.action == 'opened'
        env:
          GH_TOKEN: ${{steps.get_workflow_token.outputs.token}}
          REVIEWERS: ${{vars.PR_REVIEWERS}}
        run: |
          IFS=$'; \n'
          for r in $REVIEWERS ; do
            echo "Adding reviewer $r"
            gh pr edit --repo ${{github.repository}} ${PR_NUMBER} --add-reviewer $r || :
          done
      - name: Set Labels
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr edit --repo ${{github.repository}} \
            --remove-label ${{vars.TEST_CHECKS_PASSED_LABEL}} \
            --remove-label ${{vars.TEST_CHECKS_FAILED_LABEL}} \
            --remove-label ${{vars.TEST_GATES_PASSED_LABEL}} \
            --remove-label ${{vars.TEST_GATES_FAILED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_CHECKS_PASSED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_CHECKS_FAILED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_GATES_PASSED_LABEL}} \
            --remove-label ${{vars.CHERRY_PICK_GATES_FAILED_LABEL}} \
            --remove-label ${{vars.PR_ACCEPTANCE_TEST_LABEL}} \
            --add-label ${{vars.TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
  PRUnitTest:
    needs: PRTestSetup
    runs-on: ubuntu-latest
    steps:
      - name: Run Unit Tests
        id: run_unit_tests
        uses: asterisk/asterisk-ci-actions/AsteriskUnitComposite@main
        with:
          asterisk_repo: ${{env.ASTERISK_REPO}}
          pr_number: ${{env.PR_NUMBER}}
          base_branch: ${{env.BRANCH}}
          modules_blacklist: ${{env.MODULES_BLACKLIST}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          unittest_command: ${{vars.UNITTEST_COMMAND}}
        continue-on-error: true
      - name: Post Unit Test
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CONCLUSION: ${{ steps.run_unit_tests.conclusion }}
          OUTCOME: ${{ steps.run_unit_tests.outcome }}
        run: |
          if [ "$OUTCOME" == "success" ] ; then
            gh pr edit --repo ${{github.repository}} \
              --add-label ${{vars.TEST_CHECKS_PASSED_LABEL}} \
              ${{env.PR_NUMBER}} || :
            exit 0
          fi
          gh pr edit --repo ${{github.repository}} \
            --remove-label ${{vars.TESTING_IN_PROGRESS}} \
            --add-label ${{vars.TEST_CHECKS_FAILED_LABEL}} \
            ${{env.PR_NUMBER}} || :
          exit 1
  PRGateTestMatrix:
    needs: PRUnitTest
    continue-on-error: false
    strategy:
      fail-fast: false
      matrix:
        group: ${{ fromJSON(vars.GATETEST_LIST) }}
    runs-on: ubuntu-latest
    steps:
      - id: runtest
        name: Run Gate Tests for ${{ matrix.group }}
        uses: asterisk/asterisk-ci-actions/AsteriskGateComposite@main
        with:
          test_type: Gate
          asterisk_repo: ${{env.ASTERISK_REPO}}
          pr_number: ${{env.PR_NUMBER}}
          base_branch: ${{env.BRANCH}}
          modules_blacklist: ${{env.MODULES_BLACKLIST}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          testsuite_repo: ${{vars.TESTSUITE_REPO}}
          gatetest_group: ${{matrix.group}}
          gatetest_commands: ${{vars.GATETEST_COMMANDS}}
  PRPRGateTests:
    if: ${{ always() && github.event.label.name == vars.PR_ACCEPTANCE_TEST_LABEL }}
    runs-on: ubuntu-latest
    needs: PRGateTestMatrix
    steps:
      - name: Check gate test matrix status
        env:
          RESULT: ${{ needs.PRGateTestMatrix.result }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "all results: ${{ toJSON(needs.*.result) }}"
          echo "composite result: $RESULT"
          gh pr edit --repo ${{github.repository}} \
            --remove-label ${{vars.TESTING_IN_PROGRESS}} \
            ${{env.PR_NUMBER}} || :
          case $RESULT in
            success)
              gh pr edit --repo ${{github.repository}} \
                --add-label ${{vars.TEST_GATES_PASSED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::notice::All Testsuite tests passed"
              exit 0
              ;;
            skipped)
              gh pr edit --repo ${{github.repository}} \
                --add-label ${{vars.TEST_CHECKS_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::error::Testsuite tests were skipped because of an earlier failure"
              exit 1
              ;;
            *)
              gh pr edit --repo ${{github.repository}} \
                --add-label ${{vars.TEST_GATES_FAILED_LABEL}} \
                ${{env.PR_NUMBER}} || :
              echo "::error::One or more Testsuite tests failed ($RESULT)"
              exit 1
          esac

101
.github/workflows/Releaser.yml vendored Normal file
View File

@@ -0,0 +1,101 @@
# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: Releaser
run-name: ${{ github.actor }} is creating ${{vars.PRODUCT_NAME}} release ${{inputs.new_version}}
on:
  workflow_dispatch:
    inputs:
      new_version:
        description: |
          New Version:
          Examples:
          20.4.0-rc1, 20.4.0-rc2, 20.4.0, 20.4.1
          certified-20.4-cert1-rc1, certified-20.4-cert1
        required: true
        type: string
#      start_version:
#        description: |
#          Last Version:
#          Only use when you KNOW that the automated
#          process won't get it right.
#        required: false
#        type: string
      is_security:
        description: |
          Security?
          (No prev RCs)
        required: true
        type: boolean
        default: false
      advisories:
        description: |
          Comma separated list of advisories.
          NO SPACES
          Example: GHSA-4xjp-22g4-9fxm,GHSA-4xjp-22g4-zzzz
        required: false
        type: string
      is_hotfix:
        description: |
          Hotfix?
          (A patch release but not security. No prev RCs)
        required: true
        type: boolean
        default: false
      push_release_branches:
        description: |
          Push release branches live?
        required: true
        type: boolean
        default: false
      create_github_release:
        description: |
          Create the GitHub release?
        required: true
        type: boolean
        default: false
      push_tarballs:
        description: |
          Push tarballs to downloads server?
        required: true
        type: boolean
        default: false
      send_email:
        description: |
          Send announcement emails?
        required: true
        type: boolean
        default: false
jobs:
  ReleaseAsterisk:
    runs-on: ubuntu-latest
    steps:
      - name: Run Releaser
        uses: asterisk/asterisk-ci-actions/ReleaserComposite@main
        with:
          product: ${{vars.PRODUCT_NAME}}
          is_security: ${{inputs.is_security}}
          advisories: ${{inputs.advisories}}
          is_hotfix: ${{inputs.is_hotfix}}
          new_version: ${{inputs.new_version}}
#          start_version: ${{inputs.start_version}}
          push_release_branches: ${{inputs.push_release_branches}}
          create_github_release: ${{inputs.create_github_release}}
          push_tarballs: ${{inputs.push_tarballs}}
          send_email: ${{inputs.send_email}}
          repo: ${{github.repository}}
          mail_list_ga: ${{vars.MAIL_LIST_GA}}
          mail_list_rc: ${{vars.MAIL_LIST_RC}}
          mail_list_cert_ga: ${{vars.MAIL_LIST_CERT_GA}}
          mail_list_cert_rc: ${{vars.MAIL_LIST_CERT_RC}}
          mail_list_sec: ${{vars.MAIL_LIST_SEC_ADV}}
          sec_adv_url_base: ${{vars.SEC_ADV_URL_BASE}}
          gpg_private_key: ${{secrets.ASTDEV_GPG_PRIV_KEY}}
          github_token: ${{secrets.GITHUB_TOKEN}}
          application_id: ${{secrets.ASTERISK_ORG_ACCESS_APP_ID}}
          application_private_key: ${{secrets.ASTERISK_ORG_ACCESS_APP_PRIV_KEY}}
          asteriskteamsa_username: ${{secrets.ASTERISKTEAMSA_GMAIL_ACCT}}
          asteriskteamsa_token: ${{secrets.ASTERISKTEAMSA_GMAIL_TOKEN}}
          deploy_ssh_priv_key: ${{secrets.DOWNLOADS_DEPLOY_SSH_PRIV_KEY}}
          deploy_ssh_username: ${{secrets.DOWNLOADS_DEPLOY_SSH_USERNAME}}
          deploy_host: ${{vars.DEPLOY_HOST}}
          deploy_dir: ${{vars.DEPLOY_DIR}}

View File

@@ -1119,7 +1119,8 @@ ifeq ($(PYTHON),:)
 else
 	@$(INSTALL) -d doc/rest-api
 	$(PYTHON) rest-api-templates/make_ari_stubs.py \
-		rest-api/resources.json .
+		--resources rest-api/resources.json --source-dir $(ASTTOPDIR) \
+		--dest-dir $(ASTTOPDIR)/doc/rest-api --docs-prefix ../
 endif

 check-alembic: makeopts

17746
configure vendored

File diff suppressed because it is too large.

View File

@@ -33,9 +33,9 @@ def upgrade():
     enum = ENUM(*NEW_ENUM, name='pjsip_100rel_values_v2')
     enum.create(op.get_bind(), checkfirst=False)
-    op.execute('ALTER TABLE ps_endpoints ALTER COLUMN 100rel TYPE'
+    op.execute('ALTER TABLE ps_endpoints ALTER COLUMN "100rel" TYPE'
                ' pjsip_100rel_values_v2 USING'
-               ' 100rel::text::pjsip_100rel_values_v2')
+               ' "100rel"::text::pjsip_100rel_values_v2')
     ENUM(name="pjsip_100rel_values").drop(op.get_bind(), checkfirst=False)
@@ -50,8 +50,8 @@ def downgrade():
     enum = ENUM(*OLD_ENUM, name='pjsip_100rel_values')
     enum.create(op.get_bind(), checkfirst=False)
-    op.execute('ALTER TABLE ps_endpoints ALTER COLUMN 100rel TYPE'
+    op.execute('ALTER TABLE ps_endpoints ALTER COLUMN "100rel" TYPE'
                ' pjsip_100rel_values USING'
-               ' 100rel::text::pjsip_100rel_values')
+               ' "100rel"::text::pjsip_100rel_values')
     ENUM(name="pjsip_100rel_values_v2").drop(op.get_bind(), checkfirst=False)

View File

@@ -19,14 +19,19 @@
of a mutex to its initializer. */
#undef CAN_COMPARE_MUTEX_TO_INIT_VALUE
/* Define to 1 if the `closedir' function returns void instead of int. */
/* Define to 1 if the `closedir' function returns void instead of `int'. */
#undef CLOSEDIR_VOID
/* Some configure tests will unexpectedly fail if configure is run by a
non-root user. These may be able to be tested at runtime. */
#undef CONFIGURE_RAN_AS_ROOT
/* Define to 1 if using 'alloca.c'. */
/* Define to one of `_getb67', `GETB67', `getb67' for Cray-2 and Cray-YMP
systems. This function is required for `alloca.c' support on those systems.
*/
#undef CRAY_STACKSEG_END
/* Define to 1 if using `alloca.c'. */
#undef C_ALLOCA
/* Define to 1 if anonymous semaphores work. */
@@ -38,10 +43,11 @@
/* Define to 1 if you have the `acosl' function. */
#undef HAVE_ACOSL
/* Define to 1 if you have 'alloca', as a function or macro. */
/* Define to 1 if you have `alloca', as a function or macro. */
#undef HAVE_ALLOCA
/* Define to 1 if <alloca.h> works. */
/* Define to 1 if you have <alloca.h> and it should be used (not on Ultrix).
*/
#undef HAVE_ALLOCA_H
/* Define to 1 if you have the Advanced Linux Sound Architecture library. */
@@ -499,12 +505,12 @@
/* Define to 1 if you have the `memmove' function. */
#undef HAVE_MEMMOVE
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the `memset' function. */
#undef HAVE_MEMSET
/* Define to 1 if you have the <minix/config.h> header file. */
#undef HAVE_MINIX_CONFIG_H
/* Define to 1 if you have the `mkdir' function. */
#undef HAVE_MKDIR
@@ -1244,9 +1250,6 @@
/* Define to 1 if you have the `vprintf' function. */
#undef HAVE_VPRINTF
/* Define to 1 if you have the <wchar.h> header file. */
#undef HAVE_WCHAR_H
/* Define to 1 if you have the <winsock2.h> header file. */
#undef HAVE_WINSOCK2_H
@@ -1401,13 +1404,10 @@
STACK_DIRECTION = 0 => direction of growth unknown */
#undef STACK_DIRECTION
/* Define to 1 if all of the C90 standard headers exist (not just the ones
required in a freestanding environment). This macro is provided for
backward compatibility; new code need not use it. */
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. This
macro is obsolete. */
/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
#undef TIME_WITH_SYS_TIME
/* Define to 1 if your <sys/time.h> declares `struct tm'. */
@@ -1420,93 +1420,32 @@
#ifndef _ALL_SOURCE
# undef _ALL_SOURCE
#endif
/* Enable general extensions on macOS. */
#ifndef _DARWIN_C_SOURCE
# undef _DARWIN_C_SOURCE
#endif
/* Enable general extensions on Solaris. */
#ifndef __EXTENSIONS__
# undef __EXTENSIONS__
#endif
/* Enable GNU extensions on systems that have them. */
#ifndef _GNU_SOURCE
# undef _GNU_SOURCE
#endif
/* Enable X/Open compliant socket functions that do not require linking
with -lxnet on HP-UX 11.11. */
#ifndef _HPUX_ALT_XOPEN_SOCKET_API
# undef _HPUX_ALT_XOPEN_SOCKET_API
#endif
/* Identify the host operating system as Minix.
This macro does not affect the system headers' behavior.
A future release of Autoconf may stop defining this macro. */
#ifndef _MINIX
# undef _MINIX
#endif
/* Enable general extensions on NetBSD.
Enable NetBSD compatibility extensions on Minix. */
#ifndef _NETBSD_SOURCE
# undef _NETBSD_SOURCE
#endif
/* Enable OpenBSD compatibility extensions on NetBSD.
Oddly enough, this does nothing on OpenBSD. */
#ifndef _OPENBSD_SOURCE
# undef _OPENBSD_SOURCE
#endif
/* Define to 1 if needed for POSIX-compatible behavior. */
#ifndef _POSIX_SOURCE
# undef _POSIX_SOURCE
#endif
/* Define to 2 if needed for POSIX-compatible behavior. */
#ifndef _POSIX_1_SOURCE
# undef _POSIX_1_SOURCE
#endif
/* Enable POSIX-compatible threading on Solaris. */
/* Enable threading extensions on Solaris. */
#ifndef _POSIX_PTHREAD_SEMANTICS
# undef _POSIX_PTHREAD_SEMANTICS
#endif
/* Enable extensions specified by ISO/IEC TS 18661-5:2014. */
#ifndef __STDC_WANT_IEC_60559_ATTRIBS_EXT__
# undef __STDC_WANT_IEC_60559_ATTRIBS_EXT__
#endif
/* Enable extensions specified by ISO/IEC TS 18661-1:2014. */
#ifndef __STDC_WANT_IEC_60559_BFP_EXT__
# undef __STDC_WANT_IEC_60559_BFP_EXT__
#endif
/* Enable extensions specified by ISO/IEC TS 18661-2:2015. */
#ifndef __STDC_WANT_IEC_60559_DFP_EXT__
# undef __STDC_WANT_IEC_60559_DFP_EXT__
#endif
/* Enable extensions specified by ISO/IEC TS 18661-4:2015. */
#ifndef __STDC_WANT_IEC_60559_FUNCS_EXT__
# undef __STDC_WANT_IEC_60559_FUNCS_EXT__
#endif
/* Enable extensions specified by ISO/IEC TS 18661-3:2015. */
#ifndef __STDC_WANT_IEC_60559_TYPES_EXT__
# undef __STDC_WANT_IEC_60559_TYPES_EXT__
#endif
/* Enable extensions specified by ISO/IEC TR 24731-2:2010. */
#ifndef __STDC_WANT_LIB_EXT2__
# undef __STDC_WANT_LIB_EXT2__
#endif
/* Enable extensions specified by ISO/IEC 24747:2009. */
#ifndef __STDC_WANT_MATH_SPEC_FUNCS__
# undef __STDC_WANT_MATH_SPEC_FUNCS__
#endif
/* Enable extensions on HP NonStop. */
#ifndef _TANDEM_SOURCE
# undef _TANDEM_SOURCE
#endif
/* Enable X/Open extensions. Define to 500 only if necessary
to make mbstate_t available. */
#ifndef _XOPEN_SOURCE
# undef _XOPEN_SOURCE
/* Enable general extensions on Solaris. */
#ifndef __EXTENSIONS__
# undef __EXTENSIONS__
#endif
/* Define to 1 if running on Darwin. */
#undef _DARWIN_UNLIMITED_SELECT
/* Enable large inode numbers on Mac OS X 10.5. */
#ifndef _DARWIN_USE_64_BIT_INODE
# define _DARWIN_USE_64_BIT_INODE 1
#endif
/* Number of bits in a file offset, on hosts where this is settable. */
#undef _FILE_OFFSET_BITS
@@ -1524,6 +1463,16 @@
/* Define for large files, on AIX-style hosts. */
#undef _LARGE_FILES
/* Define to 1 if on MINIX. */
#undef _MINIX
/* Define to 2 if the system does not provide POSIX.1 features except with
this defined. */
#undef _POSIX_1_SOURCE
/* Define to 1 if you need to in order for `stat' and other things to work. */
#undef _POSIX_SOURCE
/* Define to empty if `const' does not conform to ANSI C. */
#undef const
@@ -1545,7 +1494,7 @@
/* Define to `long int' if <sys/types.h> does not define. */
#undef off_t
/* Define as a signed integer type capable of holding a process identifier. */
/* Define to `int' if <sys/types.h> does not define. */
#undef pid_t
/* Define to `unsigned int' if <sys/types.h> does not define. */

View File

@@ -4,13 +4,19 @@
#define MENUSELECT_AUTOCONFIG_H
/* Define to 1 if using 'alloca.c'. */
/* Define to one of `_getb67', `GETB67', `getb67' for Cray-2 and Cray-YMP
systems. This function is required for `alloca.c' support on those systems.
*/
#undef CRAY_STACKSEG_END
/* Define to 1 if using `alloca.c'. */
#undef C_ALLOCA
/* Define to 1 if you have 'alloca', as a function or macro. */
/* Define to 1 if you have `alloca', as a function or macro. */
#undef HAVE_ALLOCA
/* Define to 1 if <alloca.h> works. */
/* Define to 1 if you have <alloca.h> and it should be used (not on Ultrix).
*/
#undef HAVE_ALLOCA_H
/* Define to 1 if you have the `asprintf' function. */
@@ -31,6 +37,9 @@
/* Define if your system has the LIBXML2 libraries. */
#undef HAVE_LIBXML2
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the ncurses library. */
#undef HAVE_NCURSES
@@ -43,9 +52,6 @@
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdio.h> header file. */
#undef HAVE_STDIO_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
@@ -111,9 +117,7 @@
STACK_DIRECTION = 0 => direction of growth unknown */
#undef STACK_DIRECTION
/* Define to 1 if all of the C90 standard headers exist (not just the ones
required in a freestanding environment). This macro is provided for
backward compatibility; new code need not use it. */
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Define to `unsigned int' if <sys/types.h> does not define. */

3432
menuselect/configure vendored

File diff suppressed because it is too large.

View File

@@ -226,6 +226,7 @@ static int transport_create(void *data)
 	pj_strdup2(pool, &newtransport->transport.local_name.host, ast_sockaddr_stringify_addr(ast_websocket_local_address(newtransport->ws_session)));
 	newtransport->transport.local_name.port = ast_sockaddr_port(ast_websocket_local_address(newtransport->ws_session));
 	pj_strdup2(pool, &newtransport->transport.remote_name.host, ast_sockaddr_stringify_addr(ast_websocket_remote_address(newtransport->ws_session)));
+	newtransport->transport.remote_name.port = ast_sockaddr_port(ast_websocket_remote_address(newtransport->ws_session));
 	newtransport->transport.flag = pjsip_transport_get_flag_from_type((pjsip_transport_type_e)newtransport->transport.key.type);
 	newtransport->transport.dir = PJSIP_TP_DIR_INCOMING;
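The one-line fix is easiest to see with a toy model of the transport manager's keying. This is not Asterisk code, only a hypothetical illustration of why omitting the remote port collapses distinct connections into one monitor entry:

```python
# Hypothetical illustration only; the real transport manager is C code in
# pjproject/Asterisk. Monitors are keyed by (remote IP, remote port).
monitors = {}


def register(ip, port):
    key = (ip, port)   # with the fix: one entry per connection
    # key = (ip, 0)    # before the fix: the port was never copied, so every
    #                  # connection from the same IP shared one key
    monitors.setdefault(key, []).append("monitor")


register("203.0.113.5", 5061)
register("203.0.113.5", 5062)
print(len(monitors))  # 2 with the fix; 1 with the port left unset
```

With a single shared key, tearing down one failed connection also tore down the monitor covering every other connection from that address, which is the behavior the commit message describes.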

View File

@@ -1,73 +1,75 @@
 {{#api_declaration}}
-h1. {{name_title}}
-|| Method || Path<br>h5. Parameters are case-sensitive || Return Model || Summary ||
+# {{name_title}}
+| Method | Path (Parameters are case-sensitive) | Return Model | Summary |
+|:------ |:------------------------------------ |:------------ |:------- |
 {{#apis}}
 {{#operations}}
-| {{http_method}} | [{{wiki_path}}|#{{nickname}}] | {{#response_class}}{{#is_primitive}}{{name}}{{/is_primitive}}{{^is_primitive}}[{{wiki_name}}|{{wiki_prefix}} REST Data Models#{{singular_name}}]{{/is_primitive}}{{/response_class}} | {{{summary}}} |
+| {{http_method}} | [{{wiki_path}}](#{{nickname_lc}}) | {{#response_class}}{{#is_primitive}}{{name}}{{/is_primitive}}{{^is_primitive}}[{{wiki_name}}]({{wiki_prefix}}Asterisk_REST_Data_Models#{{lc_singular_name}}){{/is_primitive}}{{/response_class}} | {{{summary}}} |
 {{/operations}}
 {{/apis}}
 {{#apis}}
 {{#operations}}
-{anchor:{{nickname}}}
-h2. {{nickname}}: {{http_method}} {{wiki_path}}
+---
+[//]: # (anchor:{{nickname_lc}})
+## {{nickname}}
+### {{http_method}} {{wiki_path}}
 {{{wiki_summary}}}{{#wiki_notes}} {{{wiki_notes}}}{{/wiki_notes}}
 {{#has_path_parameters}}
-h3. Path parameters
+### Path parameters
+Parameters are case-sensitive.
 {{#path_parameters}}
 * {{name}}: _{{data_type}}_ - {{{wiki_description}}}
 {{#default_value}}
-** Default: {{default_value}}
+* Default: {{default_value}}
 {{/default_value}}
 {{#wiki_allowable_values}}
-** {{wiki_allowable_values}}
+* {{wiki_allowable_values}}
 {{/wiki_allowable_values}}
 {{/path_parameters}}
 {{/has_path_parameters}}
 {{#has_query_parameters}}
-h3. Query parameters
+### Query parameters
 {{#query_parameters}}
 * {{name}}: _{{data_type}}_ -{{#required}} *(required)*{{/required}} {{{wiki_description}}}
 {{#default_value}}
-** Default: {{default_value}}
+* Default: {{default_value}}
 {{/default_value}}
 {{#wiki_allowable_values}}
-** {{wiki_allowable_values}}
+* {{wiki_allowable_values}}
 {{/wiki_allowable_values}}
 {{#allow_multiple}}
-** Allows comma separated values.
+* Allows comma separated values.
 {{/allow_multiple}}
 {{/query_parameters}}
 {{/has_query_parameters}}
 {{#has_body_parameter}}
-h3. Body parameter
+### Body parameter
 {{#body_parameter}}
 * {{name}}: {{data_type}}{{#default_value}} = {{default_value}}{{/default_value}} -{{#required}} *(required)*{{/required}} {{{wiki_description}}}
 {{#allow_multiple}}
-** Allows comma separated values.
+* Allows comma separated values.
 {{/allow_multiple}}
 {{/body_parameter}}
 {{/has_body_parameter}}
 {{#has_header_parameters}}
-h3. Header parameters
+### Header parameters
 {{#header_parameters}}
 * {{name}}: {{data_type}}{{#default_value}} = {{default_value}}{{/default_value}} -{{#required}} *(required)*{{/required}} {{{wiki_description}}}
 {{#allow_multiple}}
-** Allows comma separated values.
+* Allows comma separated values.
 {{/allow_multiple}}
 {{/header_parameters}}
 {{/has_header_parameters}}
 {{#has_error_responses}}
-h3. Error Responses
+### Error Responses
 {{#error_responses}}
 * {{code}} - {{{wiki_reason}}}
 {{/error_responses}}

View File

@@ -28,7 +28,7 @@ except ImportError:
 import os.path

 from asterisk_processor import AsteriskProcessor
-from optparse import OptionParser
+from argparse import ArgumentParser as ArgParser
 from swagger_model import ResourceListing
 from transform import Transform
@@ -42,55 +42,61 @@ def rel(file):
     """
     return os.path.join(TOPDIR, file)

-WIKI_PREFIX = 'Asterisk 19'
-
-API_TRANSFORMS = [
-    Transform(rel('api.wiki.mustache'),
-              'doc/rest-api/%s {{name_title}} REST API.wiki' % WIKI_PREFIX),
-    Transform(rel('res_ari_resource.c.mustache'),
-              'res/res_ari_{{c_name}}.c'),
-    Transform(rel('ari_resource.h.mustache'),
-              'res/ari/resource_{{c_name}}.h'),
-    Transform(rel('ari_resource.c.mustache'),
-              'res/ari/resource_{{c_name}}.c', overwrite=False),
-]
-
-RESOURCES_TRANSFORMS = [
-    Transform(rel('models.wiki.mustache'),
-              'doc/rest-api/%s REST Data Models.wiki' % WIKI_PREFIX),
-    Transform(rel('ari.make.mustache'), 'res/ari.make'),
-    Transform(rel('ari_model_validators.h.mustache'),
-              'res/ari/ari_model_validators.h'),
-    Transform(rel('ari_model_validators.c.mustache'),
-              'res/ari/ari_model_validators.c'),
-]
-
 def main(argv):
-    parser = OptionParser(usage="Usage %prog [resources.json] [destdir]")
+    description = (
+        'Command line utility to export ARI documentation to markdown'
+    )
-    (options, args) = parser.parse_args(argv)
+    parser = ArgParser(description=description)
+    parser.add_argument('--resources', type=str, default="rest-api/resources.json",
+                        help="resources.json file to process", required=False)
+    parser.add_argument('--source-dir', type=str, default=".",
+                        help="Asterisk source directory", required=False)
+    parser.add_argument('--dest-dir', type=str, default="doc/rest-api",
+                        help="Destination directory", required=False)
+    parser.add_argument('--docs-prefix', type=str, default="../",
+                        help="Prefix to apply to links", required=False)
-    if len(args) != 3:
-        parser.error("Wrong number of arguments")
+    args = parser.parse_args()
+    if not args:
+        return
-    source = args[1]
-    dest_dir = args[2]
     renderer = pystache.Renderer(search_dirs=[TOPDIR], missing_tags='strict')
-    processor = AsteriskProcessor(wiki_prefix=WIKI_PREFIX)
+    processor = AsteriskProcessor(wiki_prefix=args.docs_prefix)
+    API_TRANSFORMS = [
+        Transform(rel('api.wiki.mustache'),
+                  '%s/{{name_title}}_REST_API.md' % args.dest_dir),
+        Transform(rel('res_ari_resource.c.mustache'),
+                  'res/res_ari_{{c_name}}.c'),
+        Transform(rel('ari_resource.h.mustache'),
+                  'res/ari/resource_{{c_name}}.h'),
+        Transform(rel('ari_resource.c.mustache'),
+                  'res/ari/resource_{{c_name}}.c', overwrite=False),
+    ]
+    RESOURCES_TRANSFORMS = [
+        Transform(rel('models.wiki.mustache'),
+                  '%s/Asterisk_REST_Data_Models.md' % args.dest_dir),
+        Transform(rel('ari.make.mustache'), 'res/ari.make'),
+        Transform(rel('ari_model_validators.h.mustache'),
+                  'res/ari/ari_model_validators.h'),
+        Transform(rel('ari_model_validators.c.mustache'),
+                  'res/ari/ari_model_validators.c'),
+    ]
     # Build the models
-    base_dir = os.path.dirname(source)
-    resources = ResourceListing().load_file(source, processor)
+    base_dir = os.path.dirname(args.resources)
+    resources = ResourceListing().load_file(args.resources, processor)
     for api in resources.apis:
         api.load_api_declaration(base_dir, processor)
     # Render the templates
     for api in resources.apis:
         for transform in API_TRANSFORMS:
-            transform.render(renderer, api, dest_dir)
+            transform.render(renderer, api, args.source_dir)
     for transform in RESOURCES_TRANSFORMS:
-        transform.render(renderer, resources, dest_dir)
+        transform.render(renderer, resources, args.source_dir)

 if __name__ == "__main__":
     sys.exit(main(sys.argv) or 0)
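For reference, the new argparse surface can be exercised on its own. A standalone sketch mirroring the options added above (names and defaults copied from the diff; the description string is the one the script now uses):

```python
# Standalone sketch of the CLI surface added in the diff above.
from argparse import ArgumentParser

parser = ArgumentParser(
    description='Command line utility to export ARI documentation to markdown')
parser.add_argument('--resources', default='rest-api/resources.json')
parser.add_argument('--source-dir', default='.')
parser.add_argument('--dest-dir', default='doc/rest-api')
parser.add_argument('--docs-prefix', default='../')

args = parser.parse_args([])  # empty argv: every option falls back to its default
print(args.resources, args.source_dir, args.dest_dir, args.docs_prefix)
```

Because every option has a default, the Makefile invocation shown earlier can pass only the flags it needs to override, which is what made the old fixed-position `[resources.json] [destdir]` interface unnecessary.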

View File

@@ -1,18 +1,18 @@
-{toc}
+---
+title: Asterisk REST Data Models
+---
+# Asterisk REST Data Models
 {{#apis}}
 {{#api_declaration}}
 {{#models}}
-h1. {{id}}
-{{#extends}}Base type: [{{extends}}|#{{extends}}]{{/extends}}
-{{#has_subtypes}}Subtypes:{{#all_subtypes}} [{{id}}|#{{id}}]{{/all_subtypes}}{{/has_subtypes}}
-{{#wiki_description}}
-{{{wiki_description}}}
-{{/wiki_description}}
-{code:language=javascript|collapse=true}
+## {{id}}
+{{#extends}}Base type: [{{extends}}](#{{extends_lc}}){{/extends}}
+{{#has_subtypes}}Subtypes:{{#all_subtypes}} [{{id}}](#{{id_lc}}){{/all_subtypes}}{{/has_subtypes}}
+### Model
+``` javascript title="{{id}}" linenums="1"
 {{{model_json}}}
-{code}
+```
+### Properties
 {{#properties}}
 * {{name}}: {{#type}}{{#is_primitive}}{{wiki_name}}{{/is_primitive}}{{^is_primitive}}[{{wiki_name}}|#{{singular_name}}]{{/is_primitive}}{{/type}}{{^required}} _(optional)_{{/required}}{{#wiki_description}} - {{{wiki_description}}}{{/wiki_description}}
 {{/properties}}

View File

@@ -332,6 +332,7 @@ class SwaggerType(Stringify):
         self.is_discriminator = None
         self.is_list = None
         self.singular_name = None
+        self.lc_singular_name = None
         self.is_primitive = None
         self.is_binary = None
@@ -345,8 +346,10 @@ class SwaggerType(Stringify):
         self.is_list = type_param is not None
         if self.is_list:
             self.singular_name = type_param
+            self.lc_singular_name = type_param.lower()
         else:
             self.singular_name = self.name
+            self.lc_singular_name = self.name.lower()
         self.is_primitive = self.singular_name in SWAGGER_PRIMITIVES
         self.is_binary = (self.singular_name == 'binary')
         processor.process_type(self, context)
@@ -364,6 +367,7 @@ class Operation(Stringify):
     def __init__(self):
         self.http_method = None
         self.nickname = None
+        self.nickname_lc = None
         self.response_class = None
         self.parameters = []
         self.summary = None
@@ -375,6 +379,7 @@ class Operation(Stringify):
         validate_required_fields(op_json, self.required_fields, context)
         self.http_method = op_json.get('httpMethod')
         self.nickname = op_json.get('nickname')
+        self.nickname_lc = self.nickname.lower()
         response_class = op_json.get('responseClass')
         self.response_class = response_class and SwaggerType().load(
             response_class, processor, context)
@@ -498,6 +503,7 @@ class Model(Stringify):
     def __init__(self):
         self.id = None
+        self.id_lc = None
         self.subtypes = []
         self.__subtype_types = []
         self.notes = None
@@ -511,6 +517,7 @@ class Model(Stringify):
         validate_required_fields(model_json, self.required_fields, context)
         # The duplication of the model's id is required by the Swagger spec.
         self.id = model_json.get('id')
+        self.id_lc = self.id.lower()
         if id != self.id:
             raise SwaggerError("Model id doesn't match name", context)
         self.subtypes = model_json.get('subTypes') or []
@@ -548,6 +555,9 @@ class Model(Stringify):
     def extends(self):
         return self.__extends_type and self.__extends_type.id

+    def extends_lc(self):
+        return self.__extends_type and self.__extends_type.id_lc
+
     def set_extends_type(self, extends_type):
         self.__extends_type = extends_type

View File

@@ -212,7 +212,11 @@ AST_TEST_DEFINE(channel_messages)
     struct stasis_message *msg;
     struct stasis_message_type *type;
     struct ast_endpoint_snapshot *actual_snapshot;
+    int expected_count;
     int actual_count;
+    int i;
+    int channel_index = -1;
+    int endpoint_index = -1;
     switch (cmd) {
     case TEST_INIT:
@@ -255,19 +259,23 @@ AST_TEST_DEFINE(channel_messages)
     ast_hangup(chan);
     chan = NULL;
-    actual_count = stasis_message_sink_wait_for_count(sink, 3,
+    expected_count = 3;
+    actual_count = stasis_message_sink_wait_for_count(sink, expected_count,
         STASIS_SINK_DEFAULT_WAIT);
-    ast_test_validate(test, 3 == actual_count);
+    ast_test_validate(test, expected_count == actual_count);
-    msg = sink->messages[1];
-    type = stasis_message_type(msg);
-    ast_test_validate(test, ast_channel_snapshot_type() == type);
-    msg = sink->messages[2];
-    type = stasis_message_type(msg);
-    ast_test_validate(test, ast_endpoint_snapshot_type() == type);
-    actual_snapshot = stasis_message_data(msg);
+    for (i = 0; i < expected_count; i++) {
+        msg = sink->messages[i];
+        type = stasis_message_type(msg);
+        if (type == ast_channel_snapshot_type()) {
+            channel_index = i;
+        }
+        if (type == ast_endpoint_snapshot_type()) {
+            endpoint_index = i;
+        }
+    }
+    ast_test_validate(test, channel_index >= 0 && endpoint_index >= 0);
+    actual_snapshot = stasis_message_data(sink->messages[endpoint_index]);
     ast_test_validate(test, 0 == actual_snapshot->num_channels);
     return AST_TEST_PASS;
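
Worth noting about the fix above: it validates the batch as a set rather than by position, since Stasis makes no ordering guarantee for delivered messages. The generic shape of that pattern, as a minimal standalone sketch in plain C (hypothetical names, not the Asterisk test API):

``` c
#include <stdio.h>

/* Hypothetical message kinds standing in for stasis message types. */
enum msg_kind { KIND_CACHE, KIND_CHANNEL, KIND_ENDPOINT };

/* Scan an unordered batch, recording where each required kind appears.
 * Returns 1 when every required kind was seen at least once. */
static int find_required(const enum msg_kind *msgs, int count,
                         int *channel_index, int *endpoint_index)
{
    int i;

    *channel_index = -1;
    *endpoint_index = -1;
    for (i = 0; i < count; i++) {
        if (msgs[i] == KIND_CHANNEL) {
            *channel_index = i;
        } else if (msgs[i] == KIND_ENDPOINT) {
            *endpoint_index = i;
        }
    }
    return *channel_index >= 0 && *endpoint_index >= 0;
}

int main(void)
{
    /* Any permutation of the batch passes; position is irrelevant. */
    enum msg_kind batch[] = { KIND_ENDPOINT, KIND_CACHE, KIND_CHANNEL };
    int chan_idx, ep_idx;

    printf("found=%d channel=%d endpoint=%d\n",
           find_required(batch, 3, &chan_idx, &ep_idx), chan_idx, ep_idx);
    return 0;
}
```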

View File

@@ -6,7 +6,6 @@ if [ "$1" = "-q" ] ; then
 fi
 PATCH=${PATCH:-patch}
-FIND=${FIND:-find}
 patchdir=${1:?You must supply a patches directory}
 sourcedir=${2?:You must supply a source directory}
@@ -21,15 +20,18 @@ if [ ! -d "$sourcedir" ] ; then
     exit 1
 fi
-patches=$(${FIND} "$patchdir" -name "*.patch")
-if [ x"$patches" = x"" ] ; then
-    echo "No patches in $patchdir" >&2
-    exit 0
-fi
-for patchfile in ${patches} ; do
-    [ -z $quiet ] && echo "Applying patch $(basename $patchfile)"
-    ${PATCH} -d "$sourcedir" -p1 -s -i "$patchfile" || exit 1
+# Patterns used in filename expansion (globs) are sorted according to the
+# current locale, so there is no need to do it explicitly.
+for patchfile in "$patchdir"/*.patch ; do
+    # A glob that doesn't match is not replaced, so we handle that here. We
+    # should only fail this test if there are no patch files.
+    [ -f "$patchfile" ] || {
+        echo "No patches in $patchdir" >&2
+        exit 0
+    }
+    [ -z "$quiet" ] && echo "Applying patch $(basename "$patchfile")"
+    ${PATCH} -d "$sourcedir" -p1 -i "$patchfile" >/dev/null || exit 1
 done
 exit 0

View File

@@ -0,0 +1,127 @@
From 863629bc65d68518d85cf94758725da3042c2445 Mon Sep 17 00:00:00 2001
From: johado <papputten@gmail.com>
Date: Mon, 18 Apr 2022 06:08:33 +0200
Subject: [PATCH] Fix double free of ossock->ossl_ctx in case of errors (#3069)
(#3070)
---
pjlib/src/pj/ssl_sock_ossl.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/pjlib/src/pj/ssl_sock_ossl.c b/pjlib/src/pj/ssl_sock_ossl.c
index ed441e3e2..180ef0fe6 100644
--- a/pjlib/src/pj/ssl_sock_ossl.c
+++ b/pjlib/src/pj/ssl_sock_ossl.c
@@ -1167,20 +1167,21 @@ static pj_status_t init_ossl_ctx(pj_ssl_sock_t *ssock)
PJ_PERROR(1,(ssock->pool->obj_name, status,
"Error loading CA list file '%s'",
cert->CA_file.ptr));
}
if (cert->CA_path.slen) {
PJ_PERROR(1,(ssock->pool->obj_name, status,
"Error loading CA path '%s'",
cert->CA_path.ptr));
}
SSL_CTX_free(ctx);
+ ossock->ossl_ctx = NULL;
return status;
} else {
PJ_LOG(4,(ssock->pool->obj_name,
"CA certificates loaded from '%s%s%s'",
cert->CA_file.ptr,
((cert->CA_file.slen && cert->CA_path.slen)?
" + ":""),
cert->CA_path.ptr));
}
}
@@ -1197,20 +1198,21 @@ static pj_status_t init_ossl_ctx(pj_ssl_sock_t *ssock)
/* Load certificate chain from file into ctx */
rc = SSL_CTX_use_certificate_chain_file(ctx, cert->cert_file.ptr);
if(rc != 1) {
status = GET_SSL_STATUS(ssock);
PJ_PERROR(1,(ssock->pool->obj_name, status,
"Error loading certificate chain file '%s'",
cert->cert_file.ptr));
SSL_CTX_free(ctx);
+ ossock->ossl_ctx = NULL;
return status;
} else {
PJ_LOG(4,(ssock->pool->obj_name,
"Certificate chain loaded from '%s'",
cert->cert_file.ptr));
}
}
/* Load private key if one is specified */
@@ -1218,20 +1220,21 @@ static pj_status_t init_ossl_ctx(pj_ssl_sock_t *ssock)
/* Adds the first private key found in file to ctx */
rc = SSL_CTX_use_PrivateKey_file(ctx, cert->privkey_file.ptr,
SSL_FILETYPE_PEM);
if(rc != 1) {
status = GET_SSL_STATUS(ssock);
PJ_PERROR(1,(ssock->pool->obj_name, status,
"Error adding private key from '%s'",
cert->privkey_file.ptr));
SSL_CTX_free(ctx);
+ ossock->ossl_ctx = NULL;
return status;
} else {
PJ_LOG(4,(ssock->pool->obj_name,
"Private key loaded from '%s'",
cert->privkey_file.ptr));
}
#if !defined(OPENSSL_NO_DH)
if (ssock->is_server) {
bio = BIO_new_file(cert->privkey_file.ptr, "r");
@@ -1267,20 +1270,21 @@ static pj_status_t init_ossl_ctx(pj_ssl_sock_t *ssock)
xcert = PEM_read_bio_X509(cbio, NULL, 0, NULL);
if (xcert != NULL) {
rc = SSL_CTX_use_certificate(ctx, xcert);
if (rc != 1) {
status = GET_SSL_STATUS(ssock);
PJ_PERROR(1,(ssock->pool->obj_name, status,
"Error loading chain certificate from buffer"));
X509_free(xcert);
BIO_free(cbio);
SSL_CTX_free(ctx);
+ ossock->ossl_ctx = NULL;
return status;
} else {
PJ_LOG(4,(ssock->pool->obj_name,
"Certificate chain loaded from buffer"));
}
X509_free(xcert);
}
BIO_free(cbio);
}
}
@@ -1335,20 +1339,21 @@ static pj_status_t init_ossl_ctx(pj_ssl_sock_t *ssock)
cert);
if (pkey) {
rc = SSL_CTX_use_PrivateKey(ctx, pkey);
if (rc != 1) {
status = GET_SSL_STATUS(ssock);
PJ_PERROR(1,(ssock->pool->obj_name, status,
"Error adding private key from buffer"));
EVP_PKEY_free(pkey);
BIO_free(kbio);
SSL_CTX_free(ctx);
+ ossock->ossl_ctx = NULL;
return status;
} else {
PJ_LOG(4,(ssock->pool->obj_name,
"Private key loaded from buffer"));
}
EVP_PKEY_free(pkey);
} else {
PJ_LOG(1,(ssock->pool->obj_name,
"Error reading private key from buffer"));
}
--
2.41.0
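
The shape of the bug fixed here: init_ossl_ctx() had already published the new context through ossock->ossl_ctx, so each error path that called SSL_CTX_free(ctx) left a dangling owner pointer behind, and the destroy path then freed it a second time. The patch pairs every free with a pointer reset; the follow-up patch below applies the same idiom to the cipher-list failure path. A minimal sketch of the idiom with hypothetical names (not the pjproject API):

``` c
#include <stdlib.h>

struct conn {
    void *ctx;  /* owned; error paths and the destroy path both free it */
};

/* Hypothetical setup step that fails, forcing the error path. */
static int setup_step(void *ctx) { (void)ctx; return -1; }

static int conn_init(struct conn *c)
{
    c->ctx = malloc(64);
    if (!c->ctx) {
        return -1;
    }
    if (setup_step(c->ctx) != 0) {
        free(c->ctx);
        c->ctx = NULL;  /* the fix: never leave a dangling owner pointer */
        return -1;
    }
    return 0;
}

static void conn_destroy(struct conn *c)
{
    /* Now safe even after a failed init: free(NULL) is a no-op. */
    free(c->ctx);
    c->ctx = NULL;
}

int main(void)
{
    struct conn c = { 0 };

    (void)conn_init(&c);  /* fails in this sketch; ctx is reset to NULL */
    conn_destroy(&c);     /* without the reset, this would double-free */
    return 0;
}
```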

View File

@@ -0,0 +1,44 @@
From 0fb32cd4c0b2f83c1f98b9dd46da713d9a433a93 Mon Sep 17 00:00:00 2001
From: Andreas Wehrmann <andreas-wehrmann@users.noreply.github.com>
Date: Tue, 27 Sep 2022 10:09:03 +0200
Subject: [PATCH] free SSL context and reset context pointer when setting the
cipher list fails; this is a followup of issue #3069 (#3245)
---
pjlib/src/pj/ssl_sock_ossl.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/pjlib/src/pj/ssl_sock_ossl.c b/pjlib/src/pj/ssl_sock_ossl.c
index c24472fec..554324305 100644
--- a/pjlib/src/pj/ssl_sock_ossl.c
+++ b/pjlib/src/pj/ssl_sock_ossl.c
@@ -1214,22 +1214,25 @@ static pj_status_t init_ossl_ctx(pj_ssl_sock_t *ssock)
PJ_LOG(1, (THIS_FILE, "Warning! Unable to set server session id "
"context. Session reuse will not work."));
}
}
if (ssl_opt)
SSL_CTX_set_options(ctx, ssl_opt);
/* Set cipher list */
status = set_cipher_list(ssock);
- if (status != PJ_SUCCESS)
+ if (status != PJ_SUCCESS) {
+ SSL_CTX_free(ctx);
+ ossock->ossl_ctx = NULL;
return status;
+ }
/* Apply credentials */
if (cert) {
/* Load CA list if one is specified. */
if (cert->CA_file.slen || cert->CA_path.slen) {
rc = SSL_CTX_load_verify_locations(
ctx,
cert->CA_file.slen == 0 ? NULL : cert->CA_file.ptr,
cert->CA_path.slen == 0 ? NULL : cert->CA_path.ptr);
--
2.41.0

View File

@@ -0,0 +1,203 @@
From 3ba8f3c0188fa05bb62d8bc9176ca7c7db79f8c0 Mon Sep 17 00:00:00 2001
From: Nanang Izzuddin <nanang@teluu.com>
Date: Tue, 20 Dec 2022 11:39:12 +0700
Subject: [PATCH 300/303] Merge pull request from GHSA-9pfh-r8x4-w26w
* Fix buffer overread in STUN message decoder
* Updates based on comments
---
pjnath/include/pjnath/stun_msg.h | 4 ++++
pjnath/src/pjnath/stun_msg.c | 32 ++++++++++++++++++++------------
2 files changed, 24 insertions(+), 12 deletions(-)
diff --git a/pjnath/include/pjnath/stun_msg.h b/pjnath/include/pjnath/stun_msg.h
index 6b5fc0f21..e8f52db3c 100644
--- a/pjnath/include/pjnath/stun_msg.h
+++ b/pjnath/include/pjnath/stun_msg.h
@@ -436,20 +436,21 @@ typedef enum pj_stun_status
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                         Magic Cookie                          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|                     Transaction ID (96 bits)                  |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\endverbatim
*/
+#pragma pack(1)
typedef struct pj_stun_msg_hdr
{
/**
* STUN message type, which the first two bits must be zeroes.
*/
pj_uint16_t type;
/**
* The message length is the size, in bytes, of the message not
* including the 20 byte STUN header.
@@ -467,53 +468,56 @@ typedef struct pj_stun_msg_hdr
* The transaction ID is a 96 bit identifier. STUN transactions are
* identified by their unique 96-bit transaction ID. For request/
* response transactions, the transaction ID is chosen by the STUN
* client and MUST be unique for each new STUN transaction generated by
* that STUN client. The transaction ID MUST be uniformly and randomly
* distributed between 0 and 2**96 - 1.
*/
pj_uint8_t tsx_id[12];
} pj_stun_msg_hdr;
+#pragma pack()
/**
* This structre describes STUN attribute header. Each attribute is
* TLV encoded, with a 16 bit type, 16 bit length, and variable value.
* Each STUN attribute ends on a 32 bit boundary:
*
* \verbatim
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type | Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
\endverbatim
*/
+#pragma pack(1)
typedef struct pj_stun_attr_hdr
{
/**
* STUN attribute type.
*/
pj_uint16_t type;
/**
* The Length refers to the length of the actual useful content of the
* Value portion of the attribute, measured in bytes. The value
* in the Length field refers to the length of the Value part of the
* attribute prior to padding - i.e., the useful content.
*/
pj_uint16_t length;
} pj_stun_attr_hdr;
+#pragma pack()
/**
* This structure describes STUN generic IP address attribute, used for
* example to represent STUN MAPPED-ADDRESS attribute.
*
* The generic IP address attribute indicates the transport address.
* It consists of an eight bit address family, and a sixteen bit port,
* followed by a fixed length value representing the IP address. If the
* address family is IPv4, the address is 32 bits, in network byte
diff --git a/pjnath/src/pjnath/stun_msg.c b/pjnath/src/pjnath/stun_msg.c
index bd83351e6..fd15230bc 100644
--- a/pjnath/src/pjnath/stun_msg.c
+++ b/pjnath/src/pjnath/stun_msg.c
@@ -739,22 +739,22 @@ PJ_DEF(int) pj_stun_set_padding_char(int chr)
int old_pad = padding_char;
padding_char = chr;
return old_pad;
}
//////////////////////////////////////////////////////////////////////////////
#define INIT_ATTR(a,t,l) (a)->hdr.type=(pj_uint16_t)(t), \
- (a)->hdr.length=(pj_uint16_t)(l)
-#define ATTR_HDR_LEN 4
+ (a)->hdr.length=(pj_uint16_t)(l)
+#define ATTR_HDR_LEN sizeof(pj_stun_attr_hdr)
static pj_uint16_t GETVAL16H(const pj_uint8_t *buf, unsigned pos)
{
return (pj_uint16_t) ((buf[pos + 0] << 8) | \
(buf[pos + 1] << 0));
}
/*unused PJ_INLINE(pj_uint16_t) GETVAL16N(const pj_uint8_t *buf, unsigned pos)
{
return pj_htons(GETVAL16H(buf,pos));
@@ -2318,56 +2318,64 @@ PJ_DEF(pj_status_t) pj_stun_msg_decode(pj_pool_t *pool,
PJ_ASSERT_RETURN(pool && pdu && pdu_len && p_msg, PJ_EINVAL);
PJ_ASSERT_RETURN(sizeof(pj_stun_msg_hdr) == 20, PJ_EBUG);
if (p_parsed_len)
*p_parsed_len = 0;
if (p_response)
*p_response = NULL;
/* Check if this is a STUN message, if necessary */
if (options & PJ_STUN_CHECK_PACKET) {
- status = pj_stun_msg_check(pdu, pdu_len, options);
- if (status != PJ_SUCCESS)
- return status;
+ status = pj_stun_msg_check(pdu, pdu_len, options);
+ if (status != PJ_SUCCESS)
+ return status;
+ } else {
+ /* For safety, verify packet length at least */
+ pj_uint32_t msg_len = GETVAL16H(pdu, 2) + 20;
+ if (msg_len > pdu_len ||
+ ((options & PJ_STUN_IS_DATAGRAM) && msg_len != pdu_len))
+ {
+ return PJNATH_EINSTUNMSGLEN;
+ }
}
/* Create the message, copy the header, and convert to host byte order */
msg = PJ_POOL_ZALLOC_T(pool, pj_stun_msg);
pj_memcpy(&msg->hdr, pdu, sizeof(pj_stun_msg_hdr));
msg->hdr.type = pj_ntohs(msg->hdr.type);
msg->hdr.length = pj_ntohs(msg->hdr.length);
msg->hdr.magic = pj_ntohl(msg->hdr.magic);
pdu += sizeof(pj_stun_msg_hdr);
/* pdu_len -= sizeof(pj_stun_msg_hdr); */
pdu_len = msg->hdr.length;
/* No need to create response if this is not a request */
if (!PJ_STUN_IS_REQUEST(msg->hdr.type))
p_response = NULL;
/* Parse attributes */
- while (pdu_len >= 4) {
- unsigned attr_type, attr_val_len;
- const struct attr_desc *adesc;
+ while (pdu_len >= ATTR_HDR_LEN) {
+ unsigned attr_type, attr_val_len;
+ const struct attr_desc *adesc;
/* Get attribute type and length. If length is not aligned
* to 4 bytes boundary, add padding.
*/
attr_type = GETVAL16H(pdu, 0);
attr_val_len = GETVAL16H(pdu, 2);
attr_val_len = (attr_val_len + 3) & (~3);
- /* Check length */
- if (pdu_len < attr_val_len) {
- pj_str_t err_msg;
- char err_msg_buf[80];
+ /* Check length */
+ if (pdu_len < attr_val_len + ATTR_HDR_LEN) {
+ pj_str_t err_msg;
+ char err_msg_buf[80];
err_msg.ptr = err_msg_buf;
err_msg.slen = pj_ansi_snprintf(err_msg_buf, sizeof(err_msg_buf),
"Attribute %s has invalid length",
pj_stun_get_attr_name(attr_type));
PJ_LOG(4,(THIS_FILE, "Error decoding message: %.*s",
(int)err_msg.slen, err_msg.ptr));
if (p_response) {
--
2.41.0
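
Two related hardening steps in this patch: the #pragma pack(1) wrappers make the header structs match the wire layout byte for byte, so sizeof(pj_stun_attr_hdr) can safely replace the bare constant 4, and the attribute loop now demands attr_val_len + ATTR_HDR_LEN remaining bytes instead of just attr_val_len, which previously let the final attribute's value be read past the end of the buffer (the new else branch adds an analogous whole-message length check when PJ_STUN_CHECK_PACKET is skipped). A simplified TLV walk with the corrected bound, using hypothetical names rather than the pjnath API:

``` c
#include <stdint.h>
#include <stddef.h>

#define TLV_HDR_LEN 4  /* 16-bit type + 16-bit length */

static uint16_t get_u16(const uint8_t *buf, size_t pos)
{
    return (uint16_t)((buf[pos] << 8) | buf[pos + 1]);
}

/* Walk type-length-value attributes; returns 0 on success or -1 on a
 * truncated attribute. */
static int walk_attrs(const uint8_t *pdu, size_t len)
{
    while (len >= TLV_HDR_LEN) {
        uint16_t val_len = get_u16(pdu, 2);
        /* Round the value length up to a 4-byte boundary, as STUN does. */
        size_t padded = ((size_t)val_len + 3) & ~(size_t)3;

        /* The fix: budget for the header as well as the value, otherwise
         * the last attribute's value can be read past the buffer end. */
        if (len < padded + TLV_HDR_LEN) {
            return -1;
        }
        pdu += TLV_HDR_LEN + padded;
        len -= TLV_HDR_LEN + padded;
    }
    return 0;
}

int main(void)
{
    /* A header claiming an 8-byte value with only 4 bytes following. */
    const uint8_t bad[] = { 0x00, 0x20, 0x00, 0x08, 1, 2, 3, 4 };

    return walk_attrs(bad, sizeof(bad)) == -1 ? 0 : 1;
}
```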

View File

@@ -0,0 +1,81 @@
From 02d2273f085943b7d8daf7814d9b316216cae26b Mon Sep 17 00:00:00 2001
From: sauwming <ming@teluu.com>
Date: Fri, 23 Dec 2022 15:05:28 +0800
Subject: [PATCH 301/303] Merge pull request from GHSA-cxwq-5g9x-x7fr
* Fixed heap buffer overflow when parsing STUN errcode attribute
* Also fixed uint parsing
---
pjnath/src/pjnath/stun_msg.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/pjnath/src/pjnath/stun_msg.c b/pjnath/src/pjnath/stun_msg.c
index fd15230bc..d3aaae5bf 100644
--- a/pjnath/src/pjnath/stun_msg.c
+++ b/pjnath/src/pjnath/stun_msg.c
@@ -1432,26 +1432,26 @@ static pj_status_t decode_uint_attr(pj_pool_t *pool,
void **p_attr)
{
pj_stun_uint_attr *attr;
PJ_UNUSED_ARG(msghdr);
/* Create the attribute */
attr = PJ_POOL_ZALLOC_T(pool, pj_stun_uint_attr);
GETATTRHDR(buf, &attr->hdr);
- attr->value = GETVAL32H(buf, 4);
-
/* Check that the attribute length is valid */
if (attr->hdr.length != 4)
return PJNATH_ESTUNINATTRLEN;
+ attr->value = GETVAL32H(buf, 4);
+
/* Done */
*p_attr = attr;
return PJ_SUCCESS;
}
static pj_status_t encode_uint_attr(const void *a, pj_uint8_t *buf,
unsigned len,
const pj_stun_msg_hdr *msghdr,
@@ -1751,28 +1751,29 @@ static pj_status_t decode_errcode_attr(pj_pool_t *pool,
{
pj_stun_errcode_attr *attr;
pj_str_t value;
PJ_UNUSED_ARG(msghdr);
/* Create the attribute */
attr = PJ_POOL_ZALLOC_T(pool, pj_stun_errcode_attr);
GETATTRHDR(buf, &attr->hdr);
+ /* Check that the attribute length is valid */
+ if (attr->hdr.length < 4)
+ return PJNATH_ESTUNINATTRLEN;
+
attr->err_code = buf[6] * 100 + buf[7];
/* Get pointer to the string in the message */
value.ptr = ((char*)buf + ATTR_HDR_LEN + 4);
value.slen = attr->hdr.length - 4;
- /* Make sure the length is never negative */
- if (value.slen < 0)
- value.slen = 0;
/* Copy the string to the attribute */
pj_strdup(pool, &attr->reason, &value);
/* Done */
*p_attr = attr;
return PJ_SUCCESS;
}
--
2.41.0
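
Both hunks apply the same rule: validate the declared attribute length before dereferencing the value, not after. decode_uint_attr() read the 32-bit value before checking that the length was 4, and decode_errcode_attr() derived a reason-string length from attr->hdr.length - 4 and clamped negatives after the fact, which still left buf[6] and buf[7] readable past a short attribute. A condensed sketch of the corrected ordering, with hypothetical names:

``` c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Parse a STUN-style ERROR-CODE value: 4 fixed bytes (class/number in
 * bytes 2 and 3), then a reason string of (length - 4) bytes.
 * Returns the numeric code, or -1 if the attribute is truncated. */
static int parse_errcode(const uint8_t *val, uint16_t length,
                         char *reason, size_t reason_size)
{
    size_t slen;

    /* Validate the declared length before touching val[2]/val[3] or
     * computing the reason length; the old code read first and only
     * clamped a negative length afterwards. */
    if (length < 4) {
        return -1;
    }
    slen = length - 4;
    if (slen >= reason_size) {
        slen = reason_size - 1;
    }
    memcpy(reason, val + 4, slen);
    reason[slen] = '\0';
    return val[2] * 100 + val[3];
}

int main(void)
{
    /* Error 438 ("Stale Nonce"): class 4, number 38, reason "Stale". */
    const uint8_t val[] = { 0, 0, 4, 38, 'S', 't', 'a', 'l', 'e' };
    char reason[32];

    printf("%d %s\n",
           parse_errcode(val, sizeof(val), reason, sizeof(reason)), reason);
    return 0;
}
```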

View File

@@ -0,0 +1,166 @@
From 0a3af5f1a0f64fd30f35338b8328391283d88ecb Mon Sep 17 00:00:00 2001
From: Matthew Fredrickson <mfredrickson@fluentstream.com>
Date: Tue, 30 May 2023 04:33:05 -0500
Subject: [PATCH 302/303] Locking fix so that SSL_shutdown and SSL_write are
not called at same time (#3583)
---
pjlib/src/pj/ssl_sock_ossl.c | 82 ++++++++++++++++++++++--------------
1 file changed, 51 insertions(+), 31 deletions(-)
diff --git a/pjlib/src/pj/ssl_sock_ossl.c b/pjlib/src/pj/ssl_sock_ossl.c
index ed441e3e2..5c8e67b76 100644
--- a/pjlib/src/pj/ssl_sock_ossl.c
+++ b/pjlib/src/pj/ssl_sock_ossl.c
@@ -1627,44 +1627,58 @@ static void ssl_destroy(pj_ssl_sock_t *ssock)
/* Potentially shutdown OpenSSL library if this is the last
* context exists.
*/
shutdown_openssl();
}
/* Reset SSL socket state */
static void ssl_reset_sock_state(pj_ssl_sock_t *ssock)
{
+ int post_unlock_flush_circ_buf = 0;
+
ossl_sock_t *ossock = (ossl_sock_t *)ssock;
+ /* Must lock around SSL calls, particularly SSL_shutdown
+ * as it can modify the write BIOs and destructively
+ * interfere with any ssl_write() calls in progress
+ * above in a multithreaded environment */
+ pj_lock_acquire(ssock->write_mutex);
+
/* Detach from SSL instance */
if (ossock->ossl_ssl) {
SSL_set_ex_data(ossock->ossl_ssl, sslsock_idx, NULL);
}
/**
* Avoid calling SSL_shutdown() if handshake wasn't completed.
* OpenSSL 1.0.2f complains if SSL_shutdown() is called during an
* SSL handshake, while previous versions always return 0.
*/
if (ossock->ossl_ssl && SSL_in_init(ossock->ossl_ssl) == 0) {
- int ret = SSL_shutdown(ossock->ossl_ssl);
- if (ret == 0) {
- /* Flush data to send close notify. */
- flush_circ_buf_output(ssock, &ssock->shutdown_op_key, 0, 0);
- }
+ int ret = SSL_shutdown(ossock->ossl_ssl);
+ if (ret == 0) {
+ /* SSL_shutdown will potentially trigger a bunch of
+ * data to dump to the socket */
+ post_unlock_flush_circ_buf = 1;
+ }
}
- pj_lock_acquire(ssock->write_mutex);
ssock->ssl_state = SSL_STATE_NULL;
+
pj_lock_release(ssock->write_mutex);
+ if (post_unlock_flush_circ_buf) {
+ /* Flush data to send close notify. */
+ flush_circ_buf_output(ssock, &ssock->shutdown_op_key, 0, 0);
+ }
+
ssl_close_sockets(ssock);
/* Upon error, OpenSSL may leave any error description in the thread
* error queue, which sometime may cause next call to SSL API returning
* false error alarm, e.g: in Linux, SSL_CTX_use_certificate_chain_file()
* returning false error after a handshake error (in different SSL_CTX!).
* For now, just clear thread error queue here.
*/
ERR_clear_error();
}
@@ -2330,52 +2344,58 @@ static pj_status_t ssl_read(pj_ssl_sock_t *ssock, void *data, int *size)
{
ossl_sock_t *ossock = (ossl_sock_t *)ssock;
int size_ = *size;
int len = size_;
/* SSL_read() may write some data to write buffer when re-negotiation
* is on progress, so let's protect it with write mutex.
*/
pj_lock_acquire(ssock->write_mutex);
*size = size_ = SSL_read(ossock->ossl_ssl, data, size_);
- pj_lock_release(ssock->write_mutex);
if (size_ <= 0) {
pj_status_t status;
int err = SSL_get_error(ossock->ossl_ssl, size_);
- /* SSL might just return SSL_ERROR_WANT_READ in
- * re-negotiation.
- */
- if (err != SSL_ERROR_NONE && err != SSL_ERROR_WANT_READ) {
- if (err == SSL_ERROR_SYSCALL && size_ == -1 &&
- ERR_peek_error() == 0 && errno == 0)
- {
- status = STATUS_FROM_SSL_ERR2("Read", ssock, size_,
- err, len);
- PJ_LOG(4,("SSL", "SSL_read() = -1, with "
- "SSL_ERROR_SYSCALL, no SSL error, "
- "and errno = 0 - skip BIO error"));
- /* Ignore these errors */
- } else {
- /* Reset SSL socket state, then return PJ_FALSE */
- status = STATUS_FROM_SSL_ERR2("Read", ssock, size_,
- err, len);
- ssl_reset_sock_state(ssock);
- return status;
- }
- }
-
- /* Need renegotiation */
- return PJ_EEOF;
+ /* SSL might just return SSL_ERROR_WANT_READ in
+ * re-negotiation.
+ */
+ if (err != SSL_ERROR_NONE && err != SSL_ERROR_WANT_READ) {
+ if (err == SSL_ERROR_SYSCALL && size_ == -1 &&
+ ERR_peek_error() == 0 && errno == 0)
+ {
+ status = STATUS_FROM_SSL_ERR2("Read", ssock, size_,
+ err, len);
+ PJ_LOG(4,("SSL", "SSL_read() = -1, with "
+ "SSL_ERROR_SYSCALL, no SSL error, "
+ "and errno = 0 - skip BIO error"));
+ /* Ignore these errors */
+ } else {
+ /* Reset SSL socket state, then return PJ_FALSE */
+ status = STATUS_FROM_SSL_ERR2("Read", ssock, size_,
+ err, len);
+ pj_lock_release(ssock->write_mutex);
+ /* Unfortunately we can't hold the lock here to reset all the state.
+ * We probably should though.
+ */
+ ssl_reset_sock_state(ssock);
+ return status;
+ }
+ }
+
+ pj_lock_release(ssock->write_mutex);
+ /* Need renegotiation */
+ return PJ_EEOF;
}
+ pj_lock_release(ssock->write_mutex);
+
return PJ_SUCCESS;
}
/* Write plain data to SSL and flush write BIO. */
static pj_status_t ssl_write(pj_ssl_sock_t *ssock, const void *data,
pj_ssize_t size, int *nwritten)
{
ossl_sock_t *ossock = (ossl_sock_t *)ssock;
pj_status_t status = PJ_SUCCESS;
--
2.41.0
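
The shape of this fix is a common one: hold the lock for the whole critical section, record any follow-up work in a flag, and perform that work only after releasing the lock, so a call that can block or re-enter (here the output flush that SSL_shutdown() may queue up) never runs while the mutex is held. A generic sketch of the deferred-work pattern using pthreads and hypothetical names, not the pjlib API:

``` c
#include <pthread.h>

struct sock {
    pthread_mutex_t write_mutex;
    int state;
};

/* Stand-ins for the shutdown call and the flush it can queue up. */
static int do_shutdown(struct sock *s) { (void)s; return 0; }
static void flush_output(struct sock *s) { (void)s; }

static void reset_sock(struct sock *s)
{
    int flush_after_unlock = 0;

    pthread_mutex_lock(&s->write_mutex);
    /* Everything that can race with a concurrent writer happens under
     * the lock, including the shutdown itself. */
    if (do_shutdown(s) == 0) {
        /* Shutdown queued data; remember to flush, but not yet. */
        flush_after_unlock = 1;
    }
    s->state = 0;
    pthread_mutex_unlock(&s->write_mutex);

    /* Deferred work runs without the lock, so it cannot deadlock
     * against callbacks that re-acquire write_mutex. */
    if (flush_after_unlock) {
        flush_output(s);
    }
}

int main(void)
{
    struct sock s = { PTHREAD_MUTEX_INITIALIZER, 1 };

    reset_sock(&s);
    return s.state;
}
```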

View File

@@ -0,0 +1,123 @@
From 0f7267f220be79e21cf9f96efa01929285e9aa55 Mon Sep 17 00:00:00 2001
From: Riza Sulistyo <trengginas@users.noreply.github.com>
Date: Wed, 5 Jul 2023 10:38:21 +0700
Subject: [PATCH 303/303] Don't call SSL_shutdown() when receiving
SSL_ERROR_SYSCALL or SSL_ERROR_SSL (#3577)
---
pjlib/src/pj/ssl_sock_imp_common.c | 1 +
pjlib/src/pj/ssl_sock_imp_common.h | 13 +++++++------
pjlib/src/pj/ssl_sock_ossl.c | 17 ++++++++++++-----
3 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/pjlib/src/pj/ssl_sock_imp_common.c b/pjlib/src/pj/ssl_sock_imp_common.c
index ae2f1136e..c825676c3 100644
--- a/pjlib/src/pj/ssl_sock_imp_common.c
+++ b/pjlib/src/pj/ssl_sock_imp_common.c
@@ -237,20 +237,21 @@ static void ssl_close_sockets(pj_ssl_sock_t *ssock)
#endif
/* When handshake completed:
* - notify application
* - if handshake failed, reset SSL state
* - return PJ_FALSE when SSL socket instance is destroyed by application.
*/
static pj_bool_t on_handshake_complete(pj_ssl_sock_t *ssock,
pj_status_t status)
{
+ ssock->handshake_status = status;
/* Cancel handshake timer */
if (ssock->timer.id == TIMER_HANDSHAKE_TIMEOUT) {
pj_timer_heap_cancel(ssock->param.timer_heap, &ssock->timer);
ssock->timer.id = TIMER_NONE;
}
/* Update certificates info on successful handshake */
if (status == PJ_SUCCESS)
ssl_update_certs_info(ssock);
diff --git a/pjlib/src/pj/ssl_sock_imp_common.h b/pjlib/src/pj/ssl_sock_imp_common.h
index cba28dbd3..8a63faa90 100644
--- a/pjlib/src/pj/ssl_sock_imp_common.h
+++ b/pjlib/src/pj/ssl_sock_imp_common.h
@@ -99,26 +99,27 @@ struct pj_ssl_sock_t
* information allocation. Don't use for
* other purposes. */
pj_ssl_sock_t *parent;
pj_ssl_sock_param param;
pj_ssl_sock_param newsock_param;
pj_ssl_cert_t *cert;
pj_ssl_cert_info local_cert_info;
pj_ssl_cert_info remote_cert_info;
- pj_bool_t is_server;
- enum ssl_state ssl_state;
- pj_ioqueue_op_key_t handshake_op_key;
- pj_ioqueue_op_key_t shutdown_op_key;
- pj_timer_entry timer;
- pj_status_t verify_status;
+ pj_bool_t is_server;
+ enum ssl_state ssl_state;
+ pj_ioqueue_op_key_t handshake_op_key;
+ pj_ioqueue_op_key_t shutdown_op_key;
+ pj_timer_entry timer;
+ pj_status_t verify_status;
+ pj_status_t handshake_status;
pj_bool_t is_closing;
unsigned long last_err;
pj_sock_t sock;
pj_activesock_t *asock;
pj_sockaddr local_addr;
pj_sockaddr rem_addr;
int addr_len;
diff --git a/pjlib/src/pj/ssl_sock_ossl.c b/pjlib/src/pj/ssl_sock_ossl.c
index 5c8e67b76..8a717e362 100644
--- a/pjlib/src/pj/ssl_sock_ossl.c
+++ b/pjlib/src/pj/ssl_sock_ossl.c
@@ -1646,27 +1646,34 @@ static void ssl_reset_sock_state(pj_ssl_sock_t *ssock)
/* Detach from SSL instance */
if (ossock->ossl_ssl) {
SSL_set_ex_data(ossock->ossl_ssl, sslsock_idx, NULL);
}
/**
* Avoid calling SSL_shutdown() if handshake wasn't completed.
* OpenSSL 1.0.2f complains if SSL_shutdown() is called during an
* SSL handshake, while previous versions always return 0.
+ * Call SSL_shutdown() when there is a timeout handshake failure or
+ * the last error is not SSL_ERROR_SYSCALL and not SSL_ERROR_SSL.
*/
if (ossock->ossl_ssl && SSL_in_init(ossock->ossl_ssl) == 0) {
- int ret = SSL_shutdown(ossock->ossl_ssl);
- if (ret == 0) {
- /* SSL_shutdown will potentially trigger a bunch of
- * data to dump to the socket */
- post_unlock_flush_circ_buf = 1;
+ if (ssock->handshake_status == PJ_ETIMEDOUT ||
+ (ssock->last_err != SSL_ERROR_SYSCALL &&
+ ssock->last_err != SSL_ERROR_SSL))
+ {
+ int ret = SSL_shutdown(ossock->ossl_ssl);
+ if (ret == 0) {
+ /* SSL_shutdown will potentially trigger a bunch of
+ * data to dump to the socket */
+ post_unlock_flush_circ_buf = 1;
+ }
}
}
ssock->ssl_state = SSL_STATE_NULL;
pj_lock_release(ssock->write_mutex);
if (post_unlock_flush_circ_buf) {
/* Flush data to send close notify. */
flush_circ_buf_output(ssock, &ssock->shutdown_op_key, 0, 0);
--
2.41.0
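
This final patch follows the OpenSSL contract: once SSL_read()/SSL_write() has failed with SSL_ERROR_SYSCALL or SSL_ERROR_SSL, SSL_shutdown() must not be called on that connection. The handshake-timeout case is carved out because no fatal record-layer error occurred there. A condensed sketch of the gate, with a hypothetical connection struct mirroring the fields the patch adds:

``` c
#include <openssl/ssl.h>

/* Hypothetical connection state mirroring the patch: the last value
 * returned by SSL_get_error() and the handshake outcome. Assumes c->ssl
 * is an initialized SSL object owned by the caller. */
struct tls_conn {
    SSL *ssl;
    int last_err;            /* last SSL_get_error() result */
    int handshake_timed_out; /* timeout is not a record-layer error */
};

static void tls_close(struct tls_conn *c)
{
    /* Per SSL_shutdown(3), don't call it after a fatal error
     * (SSL_ERROR_SYSCALL / SSL_ERROR_SSL); a handshake timeout is
     * exempt, so a close_notify can still be attempted there. */
    if (c->handshake_timed_out ||
        (c->last_err != SSL_ERROR_SYSCALL && c->last_err != SSL_ERROR_SSL)) {
        SSL_shutdown(c->ssl);
    }
    SSL_free(c->ssl);
    c->ssl = NULL;
}
```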