{"id":7466,"date":"2024-01-18T07:00:01","date_gmt":"2024-01-18T15:00:01","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/cosmosdb\/?p=7466"},"modified":"2024-08-08T07:34:40","modified_gmt":"2024-08-08T14:34:40","slug":"latest-nosql-java-ecosystem-updates-2023-q3-q4","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/cosmosdb\/latest-nosql-java-ecosystem-updates-2023-q3-q4\/","title":{"rendered":"Latest NoSQL Java Ecosystem Updates 2023 Q3 &amp; Q4"},"content":{"rendered":"<p>We&#8217;re always busy adding new features, fixes, patches, and improvements to our <a href=\"https:\/\/devblogs.microsoft.com\/cosmosdb\/azure-cosmos-db-java-ecosystem\/\" target=\"_blank\" rel=\"noopener\">Java-based client libraries for Azure Cosmos DB for NoSQL<\/a>. In this regular blog series, we share highlights of recent updates in the last period.<\/p>\n<p>&nbsp;<\/p>\n<h2>July &#8211; December 2023 updates<\/h2>\n<p>&nbsp;<\/p>\n<ol>\n<li><a href=\"#spark-3-4-support\">Spark 3.4 Support<\/a><\/li>\n<li style=\"text-align: left;\"><a href=\"#throughput-control-gateway-mode-support-in-spark-connector\">Throughput Control &#8211; gateway support in Spark Connector<\/a><\/li>\n<li><a href=\"#aggressive-connection-warmup-improvements-in-java-sdk\">Aggressive Connection Warmup Improvements in Java SDK<\/a><\/li>\n<li><a href=\"#query-pagination-improvements-in-java-sdk\">Query Pagination Improvements in Java SDK<\/a><\/li>\n<li><a href=\"#patch-operation-on-more-than-10-fields-in-spark-connector\">Patch Operation on more than 10 fields in Spark Connector<\/a><\/li>\n<li><a href=\"#bypass-integrated-cache-in-java-sdk\">Bypass Integrated Cache in Java SDK<\/a><\/li>\n<li><a href=\"#diagnostics-thresholds-support-for-java-sdk-and-spring-data\">Diagnostics Thresholds support for Java SDK and Spring Data<\/a><\/li>\n<li><a href=\"#integration-of-throughput-control-with-change-feed-processor-java-sdk\">Integration of Throughput Control with Change Feed Processor 
&#8211; Java SDK<\/a><\/li>\n<li><a href=\"#session-token-mismatch-optimization\">Session token mismatch &#8211; further optimization<\/a><\/li>\n<li><a href=\"#change-feed-processor-context-for-all-versions-and-deletes-mode-in-java-sdk\">Change Feed Processor Context for All Versions and Deletes mode in Java SDK<\/a><\/li>\n<li><a href=\"#hierarchical-partition-key-support-in-spark-connector\">Hierarchical Partition Key Support in Spark Connector<\/a><\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<h5>Spark 3.4 Support<\/h5>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/cosmosdb\/azure-cosmos-db-java-ecosystem\/#cloud-native-hybrid-transactional-and-analytical-processing-htap\" target=\"_blank\" rel=\"noopener\">Cloud-native hybrid transactional and analytical processing (HTAP)<\/a> is supported in Azure Cosmos DB through <a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/synapse-link\" target=\"_blank\" rel=\"noopener\">Synapse Link<\/a>, using OLTP and OLAP Spark connectors, which now support Spark 3.4 as of July 2023. This includes support for <a href=\"https:\/\/spark.apache.org\/docs\/3.4.2\/api\/java\/org\/apache\/spark\/sql\/types\/TimestampNTZType.html\" target=\"_blank\" rel=\"noopener\">TimestampNTZType<\/a>, introduced in Spark 3.4.<\/p>\n<p>&nbsp;<\/p>\n<h5>Throughput Control \u2013 Gateway Mode Support in Spark Connector<\/h5>\n<p>In the <a href=\"https:\/\/aka.ms\/JavaCosmosSparkConnectorDocs\">Cosmos DB Spark Connector<\/a>, throughput control is a feature that helps isolate the performance needs of applications running against a container by limiting the number of <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/cosmos-db\/request-units\" data-linktype=\"relative-path\">request units<\/a> that can be consumed by a given Spark client. 
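As an illustrative sketch (the endpoint, key, and group name are placeholders, and the exact configuration keys should be verified against the connector's throughput control documentation), throughput control is driven by Spark configuration entries such as:<\/p>\n<pre class=\"prettyprint language-py\"><code class=\"language-py\"># hypothetical configuration sketch - endpoint, key, and names are placeholders\r\ncfg = {\r\n  \"spark.cosmos.accountEndpoint\": \"https:\/\/REPLACEME.documents.azure.com:443\/\",\r\n  \"spark.cosmos.accountKey\": \"REPLACEME\",\r\n  \"spark.cosmos.database\": \"Database\",\r\n  \"spark.cosmos.container\": \"Container\",\r\n  # throughput control settings for this Spark client\r\n  \"spark.cosmos.throughputControl.enabled\": \"true\",\r\n  \"spark.cosmos.throughputControl.name\": \"SparkIngestionGroup\",\r\n  \"spark.cosmos.throughputControl.targetThroughputThreshold\": \"0.9\"\r\n}<\/code><\/pre>\n<p>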
In September 2023 we added <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/pull\/36687\" target=\"_blank\" rel=\"noopener\">support for gateway mode<\/a>; users can now enable this by setting <code class=\"notranslate\">spark.cosmos.useGatewayMode<\/code> to <strong>true<\/strong> in Spark config. Find out more about throughput control <a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/nosql\/throughput-control-spark\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<h5>Aggressive Connection Warmup Improvements in Java SDK<\/h5>\n<p>In February 2023 we introduced <a href=\"https:\/\/devblogs.microsoft.com\/cosmosdb\/latest-nosql-java-ecosystem-updates-2023-q1-q2\/#java-sdk-proactive-connection-management\" target=\"_blank\" rel=\"noopener\">Proactive Connection Management<\/a>, a feature that allows developers to warm up connections and caches for containers in both the current read region and a pre-defined number of preferred remote regions. This feature can improve tail latency in cross-region failover scenarios. 
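As a minimal sketch of how warmup can be requested when building a client (database, container, region, and credential values are placeholders; check the SDK reference for the exact builder types):<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">\/\/ Sketch: warm up connections for one container across two preferred regions\r\nCosmosContainerIdentity containerIdentity = new CosmosContainerIdentity(\"SampleDatabase\", \"SampleContainer\");\r\nCosmosContainerProactiveInitConfig proactiveInitConfig =\r\n        new CosmosContainerProactiveInitConfigBuilder(Collections.singletonList(containerIdentity))\r\n                .setProactiveConnectionRegionsCount(2)\r\n                .build();\r\nCosmosAsyncClient client = new CosmosClientBuilder()\r\n        .endpoint(\"&lt;account-endpoint&gt;\")\r\n        .key(\"&lt;account-key&gt;\")\r\n        .preferredRegions(Arrays.asList(\"East US\", \"West US\"))\r\n        .openConnectionsAndInitCaches(proactiveInitConfig)\r\n        .buildAsyncClient();<\/code><\/pre>\n<p>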
In September 2023 we made <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/pull\/36889\" target=\"_blank\" rel=\"noopener\">enhancements that improve the efficiency<\/a> of the way connections are opened during the warm-up phase.<\/p>\n<p>&nbsp;<\/p>\n<h5>Query Pagination Improvements in Java SDK<\/h5>\n<p>We&#8217;ve made <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/pull\/36847\" target=\"_blank\" rel=\"noopener\">enhancements to pagination<\/a> in September 2023.<\/p>\n<p>&nbsp;<\/p>\n<h5>Patch Operation on more than 10 fields in Spark Connector<\/h5>\n<p>For the Patch API in Cosmos DB, there is a general limitation of patching at most 10 fields of a document at a time (see <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/cosmos-db\/partial-document-update#supported-modes\">Partial document update &#8211; Azure Cosmos DB for NoSQL<\/a> for more information). In the Spark OLTP connector we have developed a new feature that allows customers to patch documents with more than 10 fields. This removes a significant restriction for customers who use Spark for bulk ingestion and document updates. By setting <code class=\"notranslate\">spark.cosmos.write.strategy<\/code> to <code class=\"notranslate\">ItemBulkUpdate<\/code>, users can now patch more than 10 fields in a single operation. 
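As a brief sketch (the DataFrame <code class=\"notranslate\">df<\/code> and the <code class=\"notranslate\">cfg<\/code> configuration dictionary are placeholders), a bulk-update write from PySpark looks like this:<\/p>\n<pre class=\"prettyprint language-py\"><code class=\"language-py\"># sketch: df and cfg are a placeholder DataFrame and connector configuration\r\ndf.write \\\r\n  .format(\"cosmos.oltp\") \\\r\n  .options(**cfg) \\\r\n  .option(\"spark.cosmos.write.strategy\", \"ItemBulkUpdate\") \\\r\n  .mode(\"APPEND\") \\\r\n  .save()<\/code><\/pre>\n<p>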
For samples of patch operations with the Cosmos DB Spark Connector, see <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/tree\/main\/sdk\/cosmos\/azure-cosmos-spark_3_2-12\/Samples\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<h5>Bypass Integrated Cache in Java SDK<\/h5>\n<p>The Azure Cosmos DB <a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/integrated-cache\" target=\"_blank\" rel=\"noopener\">integrated cache<\/a> is an in-memory cache that helps ensure manageable costs and low latency as request volume grows. The integrated cache is a read-through, write-through cache with a Least Recently Used (LRU) eviction policy. However, there are some scenarios where it is preferable to bypass the cache on a per-request basis:<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">DedicatedGatewayRequestOptions dedicatedOptions = new DedicatedGatewayRequestOptions();\r\ndedicatedOptions.setMaxIntegratedCacheStaleness(Duration.ofMinutes(2));\r\ndedicatedOptions.setIntegratedCacheBypassed(true);\r\nCosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();\r\nqueryOptions.setDedicatedGatewayRequestOptions(dedicatedOptions);<\/code><\/pre>\n<h5>Diagnostics Thresholds support for Java SDK and Spring Data<\/h5>\n<p>Back in September 2022 we introduced the <a href=\"https:\/\/devblogs.microsoft.com\/cosmosdb\/latest-nosql-java-ecosystem-updates-2022-q3-q4#client-side-metrics-via-micrometer-io-meter-registry\" target=\"_blank\" rel=\"noopener\">option to emit client metrics<\/a> from the Azure Cosmos DB Java SDK via <a href=\"https:\/\/micrometer.io\/\" target=\"_blank\" rel=\"noopener\">Micrometer MeterRegistry<\/a>, as well as doing so from the Spark connector via configuration. We have since added the ability to define <strong>thresholds<\/strong>, which, for very noisy applications, helps limit the metrics consumed to the ones you are most interested in. 
Check out documentation on <a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/nosql\/client-metrics-java\" target=\"_blank\" rel=\"noopener\">instrumenting client metrics with Micrometer using Prometheus<\/a>, which includes examples of how to define thresholds. We also recently enabled threshold support in the <a href=\"https:\/\/aka.ms\/SpringDataCosmos\" target=\"_blank\" rel=\"noopener\">Cosmos Spring Data Client Library<\/a>; just configure your <code class=\"language-default\">CosmosClientBuilder<\/code> bean as below:<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">@Bean\r\npublic CosmosClientBuilder cosmosClientBuilder() {\r\n    DirectConnectionConfig directConnectionConfig = DirectConnectionConfig.getDefaultConfig();\r\n    return new CosmosClientBuilder()\r\n            .endpoint(properties.getUri())\r\n            .key(properties.getKey())\r\n            .directMode(directConnectionConfig)\r\n            .clientTelemetryConfig(\r\n                    new CosmosClientTelemetryConfig()\r\n                            .diagnosticsThresholds(\r\n                                    new CosmosDiagnosticsThresholds()\r\n                                            .setNonPointOperationLatencyThreshold(Duration.ofMillis(nonPointOperationLatencyThresholdInMS))\r\n                                            .setPointOperationLatencyThreshold(Duration.ofMillis(pointOperationLatencyThresholdInMS))\r\n                                            .setPayloadSizeThreshold(payloadSizeThresholdInBytes)\r\n                                            .setRequestChargeThreshold(requestChargeThresholdInRU)\r\n                            )\r\n                            .diagnosticsHandler(CosmosDiagnosticsHandler.DEFAULT_LOGGING_HANDLER));\r\n}<\/code><\/pre>\n<h5>Integration of Throughput Control with Change Feed Processor &#8211; Java SDK<\/h5>\n<p>During a heavy backlog of changes 
in the monitored container, the <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/cosmos-db\/nosql\/change-feed-processor?tabs=java#implement-the-change-feed-processor\" target=\"_blank\" rel=\"noopener\">Change Feed Processor<\/a> (CFP) will keep polling documents in order to catch up. This can cause an increase in <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/cosmos-db\/request-units\" data-linktype=\"relative-path\">request unit<\/a> usage, and in turn cause heavy throttling. To avoid this, we have integrated Throughput Control with CFP. In the Java SDK, <a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/priority-based-execution?tabs=java-v4\" target=\"_blank\" rel=\"noopener\">Priority Based Execution<\/a> is already integrated into Throughput Control, making it easier for customers to set low priority for CFP-based workloads to avoid the impact of throttling on other workloads running in parallel. Here&#8217;s a sample of how to define both throughput control and a priority level when processing the change feed:<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">ThroughputControlGroupConfig throughputControlGroupConfig =\r\n        new ThroughputControlGroupConfigBuilder()\r\n                .groupName(\"changeFeedProcessor\")\r\n                .targetThroughput(1000)\r\n                .priorityLevel(PriorityLevel.LOW)\r\n                .build();\r\nChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions();\r\noptions.setFeedPollThroughputControlConfig(throughputControlGroupConfig);\r\n\r\nChangeFeedProcessor changeFeedProcessorInstance = new ChangeFeedProcessorBuilder()\r\n        .hostName(\"SampleHost_1\")\r\n        .feedContainer(feedContainer)\r\n        .leaseContainer(leaseContainer)\r\n        .options(options)\r\n        .handleChanges(handleChanges())\r\n        .buildChangeFeedProcessor();\r\nchangeFeedProcessorInstance.start()\r\n        .subscribeOn(Schedulers.boundedElastic())\r\n        .subscribe();<\/code><\/pre>\n<h5 id=\"session-token-mismatch-optimization\">Session token mismatch &#8211; further optimization<\/h5>\n<p>In the previous period we introduced a <a href=\"https:\/\/devblogs.microsoft.com\/cosmosdb\/latest-nosql-java-ecosystem-updates-2023-q1-q2\/#session-token-mismatch-optimization\" target=\"_blank\" rel=\"noopener\">session token mismatch optimization<\/a>, which allows application developers to configure hints through a <code class=\"notranslate\">SessionRetryOptions<\/code> instance. This signals to the SDK whether to pin retries on the local region or move quicker to a remote region, especially when <code class=\"notranslate\">READ_SESSION_NOT_AVAILABLE<\/code> errors are thrown. The <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/pull\/37143\" target=\"_blank\" rel=\"noopener\">latest change<\/a> adds a configurable minimum number of retries in the local region, to check whether the local region can meet session consistency before the SDK attempts to call a remote region. 
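For context, the hint itself is configured when building the client, as in this sketch (region names and credentials are placeholders; check the SDK reference for the exact builder types):<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">\/\/ Sketch: prefer moving to a remote region quickly on READ_SESSION_NOT_AVAILABLE\r\nSessionRetryOptions sessionRetryOptions = new SessionRetryOptionsBuilder()\r\n        .regionSwitchHint(CosmosRegionSwitchHint.REMOTE_REGION_PREFERRED)\r\n        .build();\r\nCosmosAsyncClient client = new CosmosClientBuilder()\r\n        .endpoint(\"&lt;account-endpoint&gt;\")\r\n        .key(\"&lt;account-key&gt;\")\r\n        .preferredRegions(Arrays.asList(\"East US\", \"West US\"))\r\n        .sessionRetryOptions(sessionRetryOptions)\r\n        .buildAsyncClient();<\/code><\/pre>\n<p>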
This gives the local region some time to catch up on replication lag.<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">int minMaxRetriesInLocalRegion = 5;\r\nSystem.setProperty(\"COSMOS.MIN_MAX_RETRIES_IN_LOCAL_REGION_WHEN_REMOTE_REGION_PREFERRED\", String.valueOf(minMaxRetriesInLocalRegion));<\/code><\/pre>\n<h5>Change Feed Processor Context for All Versions and Deletes mode in Java SDK<\/h5>\n<p>The Change Feed Processor (CFP) has a special mode for <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/cosmos-db\/nosql\/change-feed-modes?tabs=latest-version#all-versions-and-deletes-change-feed-mode-preview\" target=\"_blank\" rel=\"noopener\">All Versions and Deletes<\/a>, giving the user a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads, as well as a record of all deletes and the prior image before deletion. This <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/pull\/36715\" target=\"_blank\" rel=\"noopener\">change in the Java SDK<\/a> adds <code class=\"notranslate\">ChangeFeedProcessorContext<\/code>, which exposes details related to a batch of changes, such as the lease token.<\/p>\n<pre class=\"prettyprint language-java\"><code class=\"language-java\">public static ChangeFeedProcessor getChangeFeedProcessorForAllVersionsAndDeletesMode(String hostName, CosmosAsyncContainer feedContainer, CosmosAsyncContainer leaseContainer) {\r\n    ChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions();\r\n    return new ChangeFeedProcessorBuilder()\r\n            .hostName(hostName)\r\n            .options(options)\r\n            .feedContainer(feedContainer)\r\n            .leaseContainer(leaseContainer)\r\n            .handleAllVersionsAndDeletesChanges((docs, context) -&gt; {\r\n                for (ChangeFeedProcessorItem item : docs) {\r\n                    String leaseToken = context.getLeaseToken();\r\n                    \/\/ Handling of 
the lease token corresponding to a batch of change feed processor items goes here\r\n                }\r\n            })\r\n            .buildChangeFeedProcessor();\r\n}<\/code><\/pre>\n<h5>Hierarchical Partition Key Support in Spark Connector<\/h5>\n<p>Users can now use the Spark Connector to create containers with <a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/hierarchical-partition-keys\" target=\"_blank\" rel=\"noopener\">hierarchical partition keys<\/a> in Azure Cosmos DB. In this PySpark sample we create a new container with hierarchical partition keys, ingest some data, then query using the first two levels in the hierarchy:<\/p>\n<pre class=\"prettyprint language-py\"><code class=\"language-py\">from pyspark.sql.types import StringType\r\n\r\ncosmosEndpoint = \"https:\/\/REPLACEME.documents.azure.com:443\/\"\r\ncosmosMasterKey = \"REPLACEME\"\r\n\r\n# Configure the Catalog API\r\nspark.conf.set(\"spark.sql.catalog.cosmosCatalog\", \"com.azure.cosmos.spark.CosmosCatalog\")\r\nspark.conf.set(\"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint\", cosmosEndpoint)\r\nspark.conf.set(\"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey\", cosmosMasterKey)\r\n\r\n# create an Azure Cosmos DB container with hierarchical partitioning using the Catalog API\r\ncosmosDatabaseName = \"Database\"\r\ncosmosHierarchicalContainerName = \"HierarchicalPartitionKeyContainer\"\r\nspark.sql(\"CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '\/tenantId,\/userId,\/sessionId', manualThroughput = '1100')\".format(cosmosDatabaseName, cosmosHierarchicalContainerName))\r\n\r\ncfg = {\r\n  \"spark.cosmos.accountEndpoint\" : cosmosEndpoint,\r\n  \"spark.cosmos.accountKey\" : cosmosMasterKey,\r\n  \"spark.cosmos.database\" : cosmosDatabaseName,\r\n  \"spark.cosmos.container\" : cosmosHierarchicalContainerName,\r\n  \"spark.cosmos.read.partitioning.strategy\" : 
\"Restrictive\"\r\n}\r\n\r\n# ingest some data\r\nspark.createDataFrame(((\"id1\", \"tenant 1\", \"User 1\", \"session 1\"), (\"id2\", \"tenant 1\", \"User 1\", \"session 1\"), (\"id3\", \"tenant 2\", \"User 1\", \"session 1\"))) \\\r\n  .toDF(\"id\", \"tenantId\", \"userId\", \"sessionId\") \\\r\n  .write \\\r\n  .format(\"cosmos.oltp\") \\\r\n  .options(**cfg) \\\r\n  .mode(\"APPEND\") \\\r\n  .save()\r\n\r\n# query by filtering the first two levels in the hierarchy without feedRangeFilter - this is less efficient as it will go through all physical partitions\r\nquery_df = spark.read.format(\"cosmos.oltp\").options(**cfg) \\\r\n  .option(\"spark.cosmos.read.customQuery\", \"SELECT * from c where c.tenantId = 'tenant 1' and c.userId = 'User 1'\").load()\r\nquery_df.show()\r\n\r\n# prepare feed range to filter on first two levels in the hierarchy\r\nspark.udf.registerJavaFunction(\"GetFeedRangeForPartitionKey\", \"com.azure.cosmos.spark.udf.GetFeedRangeForHierarchicalPartitionKeyValues\", StringType())\r\npkDefinition = \"{\\\"paths\\\":[\\\"\/tenantId\\\",\\\"\/userId\\\",\\\"\/sessionId\\\"],\\\"kind\\\":\\\"MultiHash\\\"}\"\r\npkValues = \"[\\\"tenant 1\\\", \\\"User 1\\\"]\"\r\nfeedRangeDf = spark.sql(f\"SELECT GetFeedRangeForPartitionKey('{pkDefinition}', '{pkValues}')\")\r\nfeedRange = feedRangeDf.collect()[0][0]\r\n\r\n# query by filtering the first two levels in the hierarchy using feedRangeFilter (will target the physical partition in which all sub-partitions are co-located)\r\nquery_df = spark.read.format(\"cosmos.oltp\").options(**cfg).option(\"spark.cosmos.partitioning.feedRangeFilter\", feedRange).load()\r\nquery_df.show()<\/code><\/pre>\n<h5>Fixes, patches, and enhancements<\/h5>\n<p>In addition to all of the above features, we have of course made a large number of smaller bug fixes, security patches, enhancements, and improvements. 
You can track all the changes for each client library, along with the <strong>minimum version we recommend you use<\/strong>, by viewing the change logs:<\/p>\n<ul>\n<li><strong>Java SDK <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/blob\/main\/sdk\/cosmos\/azure-cosmos\/CHANGELOG.md\" target=\"_blank\" rel=\"noopener\">change log<\/a><\/strong><\/li>\n<li><strong>Spring Data Client Library <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/blob\/main\/sdk\/spring\/azure-spring-data-cosmos\/CHANGELOG.md\" target=\"_blank\" rel=\"noopener\">change log<\/a><\/strong><\/li>\n<li><strong>OLTP Spark Connector <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/blob\/main\/sdk\/cosmos\/azure-cosmos-spark_3-3_2-12\/CHANGELOG.md\" target=\"_blank\" rel=\"noopener\">change log<\/a><\/strong><\/li>\n<li><strong>Kafka Connectors <a href=\"https:\/\/github.com\/microsoft\/kafka-connect-cosmosdb\/blob\/dev\/CHANGELOG.md\" target=\"_blank\" rel=\"noopener\">change log<\/a><\/strong><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3 id=\"get-started\">Get Started with Java in Azure Cosmos DB<i class=\"fabric-icon fabric-icon--Link\" aria-hidden=\"true\"><\/i><\/h3>\n<ul>\n<li><a href=\"https:\/\/docs.microsoft.com\/azure\/cosmos-db\/sql\/sql-api-sdk-java-v4\" target=\"_blank\" rel=\"noopener\">Azure Cosmos DB Java SDK v4 technical documentation<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/azure\/cosmos-db\/nosql\/troubleshoot-java-sdk-v4?tabs=sync\" target=\"_blank\" rel=\"noopener\">Diagnose and troubleshoot Azure Cosmos DB Java SDK v4<\/a><\/li>\n<li><a href=\"https:\/\/docs.microsoft.com\/azure\/cosmos-db\/sql\/sql-api-java-sdk-samples\" target=\"_blank\" rel=\"noopener\">Azure Cosmos DB Java SDK v4 getting started sample application<\/a><\/li>\n<li><a href=\"https:\/\/aka.ms\/CosmosJavaSDKSamples\" target=\"_blank\" rel=\"noopener\">Java V4 SDK comprehensive samples repository<\/a><\/li>\n<li><a href=\"https:\/\/aka.ms\/CosmosSpringDataSamples\" 
target=\"_blank\" rel=\"noopener\">Azure Cosmos DB Spring Data Client Library Samples<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/AzureCosmosDB\/CosmicWorksJava\" target=\"_blank\" rel=\"noopener\">Cosmic Works Java<\/a><\/li>\n<li class=\"\"><a href=\"https:\/\/docs.microsoft.com\/azure\/cosmos-db\/sql\/sql-api-sdk-java-v4\" target=\"_blank\" rel=\"noopener\">Release notes and additional resources<\/a><\/li>\n<li><a href=\"https:\/\/devblogs.microsoft.com\/cosmosdb\/java-sdk-v4-async-vs-sync\/\" target=\"_blank\" rel=\"noopener\">Exploring the Async API (reactor programming)<\/a><\/li>\n<\/ul>\n<h3 id=\"about-azure-cosmos-db\">About Azure Cosmos DB<i class=\"fabric-icon fabric-icon--Link\" aria-hidden=\"true\"><\/i><\/h3>\n<p>Azure Cosmos DB is a fully managed and serverless distributed database for modern app development, with SLA-backed speed and availability, automatic and instant scalability, and support for open source PostgreSQL, MongoDB and Apache Cassandra. <a href=\"https:\/\/cosmos.azure.com\/try\/\">Try Azure Cosmos DB for free here<\/a>. To stay in the loop on Azure Cosmos DB updates, follow us on <a href=\"https:\/\/twitter.com\/AzureCosmosDB\">Twitter<\/a>, <a href=\"https:\/\/www.youtube.com\/AzureCosmosDB\">YouTube<\/a>, and <a href=\"https:\/\/www.linkedin.com\/company\/azure-cosmos-db\/\">LinkedIn<\/a>.<\/p>\n<p class=\"\">To easily build your first database, watch our\u00a0<a href=\"https:\/\/youtube.com\/playlist?list=PLmamF3YkHLoLLGUtSoxmUkORcWaTyHlXp\" target=\"_blank\" rel=\"noopener\">Get Started videos<\/a> on YouTube and explore ways to <a href=\"https:\/\/docs.microsoft.com\/azure\/cosmos-db\/optimize-dev-test\" target=\"_blank\" rel=\"noopener\">dev\/test free.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We&#8217;re always busy adding new features, fixes, patches, and improvements to our Java-based client libraries for Azure Cosmos DB for NoSQL. 
In this regular blog series, we share highlights of recent updates in the last period. &nbsp; July &#8211; December 2023 updates &nbsp; Spark 3.4 Support Throughput Control &#8211; gateway support in Spark Connector Aggressive [&hellip;]<\/p>\n","protected":false},"author":9387,"featured_media":5405,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[14,1915,643,1778,1849],"tags":[],"class_list":["post-7466","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-core-sql-api","category-java-ecosystem-updates","category-java-sdk","category-spark","category-spring-data"],"acf":[],"blog_post_summary":"<p>We&#8217;re always busy adding new features, fixes, patches, and improvements to our Java-based client libraries for Azure Cosmos DB for NoSQL. In this regular blog series, we share highlights of recent updates in the last period. &nbsp; July &#8211; December 2023 updates &nbsp; Spark 3.4 Support Throughput Control &#8211; gateway support in Spark Connector Aggressive 
[&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/posts\/7466","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/users\/9387"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/comments?post=7466"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/posts\/7466\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/media\/5405"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/media?parent=7466"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/categories?post=7466"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cosmosdb\/wp-json\/wp\/v2\/tags?post=7466"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}