
Commit 5d3450b

trivial changes (#322)

Authored by Mike Skells (mkeskells)
Co-authored-by: Mike Skells <[email protected]>
1 parent 87de5b9 commit 5d3450b

File tree

9 files changed: +12 -14 lines changed


CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -44,7 +44,7 @@ GitHub provides additional document on [forking a repository](https://help.githu
 
 All connectors for Apache Kafka in this repository are open source products released under the Apache 2.0 license (see either [the Apache site](https://www.apache.org/licenses/LICENSE-2.0) or the [LICENSE.txt file](LICENSE.txt)). The Apache 2.0 license allows you to freely use, modify, distribute, and sell your own products that include Apache 2.0 licensed software.
 
-We respect intellectual property rights of others and we want to make sure all incoming contributions are correctly attributed and licensed. A Developer Certificate of Origin (DCO) is a lightweight mechanism to do that.
+We respect intellectual property rights of others, and we want to make sure all incoming contributions are correctly attributed and licensed. A Developer Certificate of Origin (DCO) is a lightweight mechanism to do that.
 
 So we require by making a contribution every contributor certifies that:
 ```

@@ -62,7 +62,7 @@ For more information see the [Code of Conduct FAQ](https://www.contributor-coven
 
 
 ## Security issue notifications
-If you discover a potential security issue in this project we ask that you report it according to [Security Policy](SECURITY.md). Please do **not** create a public github issue.
+If you discover a potential security issue in this project we ask that you report it according to [Security Policy](SECURITY.md). Please do **not** create a public GitHub issue.
 
 ## Licensing
 

commons/src/main/java/io/aiven/kafka/connect/common/grouper/KeyAndTopicPartitionRecordGrouper.java

Lines changed: 1 addition & 3 deletions
@@ -66,10 +66,8 @@ public void put(final SinkRecord record) {
 
         final String recordKey = generateRecordKey(record);
 
-        fileBuffers.putIfAbsent(recordKey, new ArrayList<>());
-
+        final List<SinkRecord> records = fileBuffers.computeIfAbsent(recordKey, ignored -> new ArrayList<>(1));
         // one record per file
-        final List<SinkRecord> records = fileBuffers.get(recordKey);
         records.clear();
         records.add(record);
     }
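
This replaces the putIfAbsent-then-get pair with a single computeIfAbsent call: the map is probed once, and the empty list is only allocated when the key is genuinely absent. A minimal standalone sketch of the idiom (the map and keys here are illustrative, not the grouper's fields):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComputeIfAbsentSketch {
    public static void main(String[] args) {
        final Map<String, List<String>> buffers = new HashMap<>();

        // Before: putIfAbsent allocates a fresh ArrayList on every call,
        // even when the key is already mapped, and needs a second lookup.
        buffers.putIfAbsent("key-a", new ArrayList<>());
        buffers.get("key-a").add("record-1");

        // After: computeIfAbsent only allocates when the key is absent
        // and returns the mapped value, so one lookup does both steps.
        buffers.computeIfAbsent("key-b", ignored -> new ArrayList<>(1)).add("record-2");

        System.out.println(buffers); // {key-a=[record-1], key-b=[record-2]}
    }
}
```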

commons/src/main/java/io/aiven/kafka/connect/common/grouper/RecordGrouper.java

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ public interface RecordGrouper {
     /**
      * Get all records associated with files, grouped by the file name.
      *
-     * @return map of records assotiated with files
+     * @return map of records associated with files
      */
     Map<String, List<SinkRecord>> records();
 

gcs-sink-connector/src/main/java/io/aiven/kafka/connect/gcs/GoogleCredentialsBuilder.java

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ private GoogleCredentialsBuilder() {
      * non-{@code null}), this is an error.
      *
      * <p>
-     * If either @code credentialsPath} or {@code credentialsJson} is provided, it's used to construct the credentials.
+     * If either {@code credentialsPath} or {@code credentialsJson} is provided, it's used to construct the credentials.
      *
      * <p>
      * If none are provided, the default GCP SDK credentials acquisition mechanism is used.
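
The corrected javadoc spells out a three-way precedence: explicit path, explicit JSON, then the SDK default. A hedged sketch of that resolution order (GoogleCredentials.fromStream and getApplicationDefault are real google-auth-library calls; the method shape is illustrative, not the builder's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import com.google.auth.oauth2.GoogleCredentials;

public final class CredentialsPrecedenceSketch {
    // Illustrative only: mirrors the precedence the javadoc describes.
    static GoogleCredentials resolve(final String credentialsPath, final String credentialsJson)
            throws IOException {
        if (credentialsPath != null && credentialsJson != null) {
            // Both provided at once is an error, per the javadoc.
            throw new IllegalArgumentException("Provide either a path or JSON, not both");
        }
        if (credentialsPath != null) {
            try (InputStream stream = Files.newInputStream(Paths.get(credentialsPath))) {
                return GoogleCredentials.fromStream(stream);
            }
        }
        if (credentialsJson != null) {
            return GoogleCredentials.fromStream(
                    new ByteArrayInputStream(credentialsJson.getBytes(StandardCharsets.UTF_8)));
        }
        // Neither provided: fall back to the default GCP SDK mechanism.
        return GoogleCredentials.getApplicationDefault();
    }
}
```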

gcs-sink-connector/src/test/java/io/aiven/kafka/connect/common/grouper/GcsSinkTaskGroupByKeyPropertiesTest.java

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@
  * <a href="https://jqwik.net/docs/current/user-guide.html">jqwik</a>.
  *
  * <p>
- * The idea is to generate random batches of {@link SinkRecord} (see {@link PbtBase#recordBatches()}, put them into a
+ * The idea is to generate random batches of {@link SinkRecord} (see {@link PbtBase#recordBatches()}), put them into a
  * task, and check certain properties of the written files afterwards. Files are written virtually using the in-memory
  * GCS mock.
  */
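
This javadoc (and the near-identical one in the next file) describes the classic property-based-testing loop: generate arbitrary input, run it through the unit under test, assert an invariant. A minimal jqwik sketch of that shape, using plain strings instead of SinkRecord batches (the provider and property names are illustrative, not PbtBase's actual members):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import net.jqwik.api.Arbitraries;
import net.jqwik.api.Arbitrary;
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.Provide;

class GroupingPropertySketch {
    // Stand-in for PbtBase#recordBatches(): random batches of simple values.
    @Provide
    Arbitrary<List<String>> recordBatches() {
        return Arbitraries.strings().alpha().ofMinLength(1).list().ofMaxSize(100);
    }

    // For every generated batch, grouping must neither lose nor invent records.
    @Property
    boolean groupingPreservesAllRecords(@ForAll("recordBatches") final List<String> batch) {
        final Map<Integer, List<String>> grouped = batch.stream()
                .collect(Collectors.groupingBy(String::length));
        return grouped.values().stream().mapToInt(List::size).sum() == batch.size();
    }
}
```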

gcs-sink-connector/src/test/java/io/aiven/kafka/connect/common/grouper/GcsSinkTaskGroupByTopicPartitionPropertiesTest.java

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@
  * <a href="https://jqwik.net/docs/current/user-guide.html">jqwik</a>.
  *
  * <p>
- * The idea is to generate random batches of {@link SinkRecord} (see {@link PbtBase#recordBatches()}, put them into a
+ * The idea is to generate random batches of {@link SinkRecord} (see {@link PbtBase#recordBatches()}), put them into a
  * task, and check certain properties of the written files afterwards. Files are written virtually using the in-memory
  * GCS mock.
  */

gcs-sink-connector/src/test/java/io/aiven/kafka/connect/gcs/GcsSinkTaskTest.java

Lines changed: 2 additions & 2 deletions
@@ -123,8 +123,8 @@ private Map<String, Collection<Record>> toBlobNameWithRecordsMap(final String co
         for (final Record record : records) {
             final int offset = topicPartitionMinimumOffset.get(record.topic + "-" + record.partition);
             final String key = record.topic + "-" + record.partition + "-" + offset + extension;
-            blobNameWithRecordsMap.putIfAbsent(key, new ArrayList<>()); // NOPMD
-            blobNameWithRecordsMap.get(key).add(record);
+            blobNameWithRecordsMap.computeIfAbsent(key, ignored -> new ArrayList<>()) // NOPMD
+                    .add(record);
         }
         return blobNameWithRecordsMap;
     }
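
Since the helper only builds a key-to-records multimap, the same result could also come from a stream collector; a hypothetical sketch with a simplified key (Record here is a stand-in record type, and the real helper's key also includes the minimum offset and extension):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingBySketch {
    // Stand-in for the test's Record type; illustrative only.
    record Record(String topic, int partition, String value) {}

    public static void main(String[] args) {
        final List<Record> records = List.of(
                new Record("topic0", 0, "a"),
                new Record("topic0", 0, "b"),
                new Record("topic1", 1, "c"));

        // groupingBy builds the key -> list-of-records map in one pass,
        // equivalent to the computeIfAbsent loop in the diff above.
        final Map<String, List<Record>> byBlobName = records.stream()
                .collect(Collectors.groupingBy(r -> r.topic() + "-" + r.partition()));

        System.out.println(byBlobName); // {topic0-0=[...], topic1-1=[...]}
    }
}
```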

gcs-sink-connector/src/test/java/io/aiven/kafka/connect/gcs/config/GcsSinkConfigTest.java

Lines changed: 2 additions & 2 deletions
@@ -224,11 +224,11 @@ void wrongRetryPolicySettings() {
                 "Invalid value -1 for configuration gcs.retry.backoff.total.timeout.ms: " + "Value must be at least 0",
                 totalTimeoutE.getMessage());
 
-        final var tooBigTotoalTimeoutProp = Map.of("gcs.bucket.name", "test-bucket",
+        final var tooBigTotalTimeoutProp = Map.of("gcs.bucket.name", "test-bucket",
                 "gcs.retry.backoff.total.timeout.ms", String.valueOf(TimeUnit.HOURS.toMillis(25)));
 
         final var tooBigTotalTimeoutE = assertThrows(ConfigException.class,
-                () -> new GcsSinkConfig(tooBigTotoalTimeoutProp));
+                () -> new GcsSinkConfig(tooBigTotalTimeoutProp));
         assertEquals("Invalid value 90000000 for configuration gcs.retry.backoff.total.timeout.ms: "
                 + "Value must be no more than 86400000 (24 hours)", tooBigTotalTimeoutE.getMessage());
     }
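
The asserted messages encode a bounds check: the total timeout must be at least 0 and no more than 24 hours (86400000 ms). A minimal sketch of that kind of check, assuming nothing about GcsSinkConfig's real implementation and using IllegalArgumentException in place of Kafka's ConfigException:

```java
import java.util.concurrent.TimeUnit;

public class TimeoutRangeCheckSketch {
    private static final long MAX_TOTAL_TIMEOUT_MS = TimeUnit.HOURS.toMillis(24); // 86400000

    // Hypothetical validator mirroring the messages asserted in the test above.
    static void validateTotalTimeout(final long valueMs) {
        if (valueMs < 0) {
            throw new IllegalArgumentException(
                    "Invalid value " + valueMs + " for configuration gcs.retry.backoff.total.timeout.ms: "
                            + "Value must be at least 0");
        }
        if (valueMs > MAX_TOTAL_TIMEOUT_MS) {
            throw new IllegalArgumentException(
                    "Invalid value " + valueMs + " for configuration gcs.retry.backoff.total.timeout.ms: "
                            + "Value must be no more than 86400000 (24 hours)");
        }
    }

    public static void main(String[] args) {
        validateTotalTimeout(TimeUnit.MINUTES.toMillis(5)); // within bounds, passes
        try {
            validateTotalTimeout(TimeUnit.HOURS.toMillis(25)); // 90000000, over the cap
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```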

gcs-sink-connector/src/test/java/io/aiven/kafka/connect/gcs/config/GcsSinkCredentialsConfigTest.java

Lines changed: 1 addition & 1 deletion
@@ -141,7 +141,7 @@ void gcsCredentialsDefault() {
         final GcsSinkConfig config = new GcsSinkConfig(properties);
 
         // Note that we're using a mock here since the Google credentials are not part of the environment when running
-        // in github actions. It's better to use a mock here and make the test self-contained than it is to make things
+        // in GitHub actions. It's better to use a mock here and make the test self-contained than it is to make things
         // more complicated and making it rely on the environment it's executing within.
         try (MockedStatic<GoogleCredentials> mocked = mockStatic(GoogleCredentials.class)) {
             final GoogleCredentials googleCredentials = mock(GoogleCredentials.class);
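
The hunk shows only the opening of the try-with-resources block. A hedged sketch of how such a static mock is typically stubbed with Mockito's mockStatic (the stubbing line is an assumption about how the test proceeds, not a quote from it; GoogleCredentials.getApplicationDefault() is the real static factory being replaced):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.mockStatic;

import com.google.auth.oauth2.GoogleCredentials;
import org.mockito.MockedStatic;

public class MockStaticSketch {
    public static void main(String[] args) {
        // mockStatic (mockito-inline) replaces the static factory for the
        // duration of the try block, so no real GCP credentials are needed.
        try (MockedStatic<GoogleCredentials> mocked = mockStatic(GoogleCredentials.class)) {
            final GoogleCredentials credentials = mock(GoogleCredentials.class);
            mocked.when(GoogleCredentials::getApplicationDefault).thenReturn(credentials);
            // Code under test that calls GoogleCredentials.getApplicationDefault()
            // now receives the mock instead of touching the environment.
        }
    }
}
```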
