Get started with Azure SDK and Apache Maven
This article shows you how to use Apache Maven to build applications with the Azure
SDK for Java. In this article, you set up a new project with Maven, build projects with
Maven, and use the GraalVM native image tooling to create platform-specific native
binaries.
The Azure SDK for Java project includes a Maven archetype that can accelerate the
bootstrapping of a new project. The Azure SDK for Java Maven archetype creates a new
application, with files and a directory structure that follows best practices. In particular,
the Azure SDK for Java Maven archetype creates a new Maven project with the following
features:
A dependency on the latest azure-sdk-bom BOM release, which ensures that all
dependencies for Azure SDK for Java are aligned, and gives you the best developer
experience possible.
Built-in support for GraalVM native image compilation.
Support for generating a new project with a specified set of Azure SDK for Java
client libraries.
Integration with the Azure SDK for Java build tooling, which gives build-time
analysis of your project to ensure that many best practices are followed.
Prerequisites
Java Developer Kit, version 8 or later. We recommend version 17 for the best
experience.
Apache Maven
To create a new project based on the Azure SDK for Java Maven archetype, run the following command:
Bash
mvn archetype:generate \
-DarchetypeGroupId=com.azure.tools \
-DarchetypeArtifactId=azure-sdk-archetype
After you enter this command, a series of prompts asks for details about your project so
the archetype can generate the right output for you. The following list describes the
properties you need to provide values for:
groupId (required): The Maven groupId to use in the POM file created for the generated project.
artifactId (required): The Maven artifactId to use in the POM file created for the generated project.
package (optional): The package name to put the generated code into. Inferred from the groupId if it's not specified.
azureLibraries (optional): A comma-separated list of Azure SDK for Java libraries, using their Maven artifact IDs. For a list of such artifact IDs, see Azure SDK Releases.
enableGraalVM (optional): false to indicate that the generated Maven POM file shouldn't include support for compiling your application to a native image using GraalVM; otherwise, true. The default value is true.
javaVersion (optional): The minimum version of the JDK to target when building the generated project, such as 8, 11, or 17. The default value is the latest LTS release (currently 17). The minimum value is 8.
junitVersion (optional): The version of JUnit to include as a dependency. The default value is 5. Valid values are 4 and 5.
Alternatively, you can provide these values when you call the archetype command shown
earlier. This approach is useful, for example, for automation purposes. You can specify
the values as parameters using the standard Maven syntax of appending -D to the
parameter name, for example:
Bash
-DjavaVersion=17
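For example, the following command supplies several of these properties up front; the groupId and artifactId values shown here are placeholders to replace with your own:
Bash
mvn archetype:generate \
    -DarchetypeGroupId=com.azure.tools \
    -DarchetypeArtifactId=azure-sdk-archetype \
    -DgroupId=com.example \
    -DartifactId=azure-sdk-sample \
    -DjavaVersion=17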
The Azure SDK for Java build tool performs several validations when it runs, including:
Validation of the correct use of the azure-sdk-for-java BOM, including using the latest version, and relying on it to define dependency versions on Azure SDK for Java client libraries. For more information, see the Add Azure SDK for Java to an existing project section.
Validation that historical Azure client libraries aren't being used when newer and improved versions exist.
You can configure the build tool in a project Maven POM file as shown in the following
example. Be sure to replace the {latest_version} placeholder with the latest version
listed online .
XML
<build>
<plugins>
<plugin>
<groupId>com.azure.tools</groupId>
<artifactId>azure-sdk-build-tool</artifactId>
<version>{latest_version}</version>
</plugin>
</plugins>
</build>
After adding the build tool into a Maven project, you can run the tool by calling mvn
compile azure:run . Depending on the configuration provided, you can expect to see
build failures or report files generated that can inform you about potential issues before
they become more serious. We recommend that you run this tool as part of your CI/CD
pipeline. As the build tool evolves, we'll publish new releases, and we recommend that
developers frequently check for new releases and update as appropriate.
It's possible to configure the build tool to enable or disable particular features. For this
configuration, add a configuration section in the XML shown previously. Within that
section, configure the settings shown in the following table. Any configuration that isn't
explicitly mentioned takes the default value specified in the table.
To use dependency versions for an Azure SDK for Java client library that is in the BOM,
include the following snippet in the project pom.xml file. Replace the
{bom_version_to_target} placeholder with the latest release of the Azure SDK for Java
BOM . Replace the {artifactId} placeholder with the Azure service SDK package
name.
XML
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-sdk-bom</artifactId>
<version>{bom_version_to_target}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>{artifactId}</artifactId>
</dependency>
</dependencies>
You can find all releases of the Azure SDK for Java client BOM at azure-sdk-bom . We
recommend using the latest version to take advantage of the newest features of the
Azure SDK for Java client libraries.
Using Maven to define project dependencies can make managing your projects simpler.
With the Azure SDK BOM and Azure SDK Maven archetype, you can accelerate your
project while being more confident about your dependency versioning over the long
term. We recommend using the BOM to keep dependencies aligned and up to date.
In addition to adding the Azure SDK BOM, we recommend also including the Azure SDK
for Java build tool. This tool helps to diagnose many issues commonly encountered
when building applications, as described previously in this article.
XML
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-messaging-eventhubs</artifactId> <!-- Use the
dependency version that is in the BOM -->
</dependency>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-messaging-servicebus</artifactId>
<version>7.4.0</version> <!-- Override the Service Bus dependency
version specified in the BOM -->
</dependency>
</dependencies>
If you use this approach and specify versions directly in your project, you might get
dependency version conflicts. These conflicts arise because different packages may
depend on different versions of common dependencies, and these versions may not be
compatible with each other. When conflicts occur, you can experience undesirable
behavior at compile time or runtime. We recommend that you rely on the versions in the
Azure SDK BOM unless it's necessary to override them. For more information on dealing with
dependencies when using the Azure SDK for Java, see Troubleshoot dependency version
conflicts.
To get started, you need to install GraalVM and prepare your development system for
compiling native images. The installation process for GraalVM is straightforward, and the
GraalVM documentation provides step-by-step instructions for installing GraalVM and
using GraalVM to install native-image . Follow the prerequisites section carefully to
install the necessary native compilers for your operating system.
The Azure SDK for Java Maven archetype can configure your build to support GraalVM
native image compilation, but you can also add it to an existing Maven build. You can
find instructions for Maven on the GraalVM website.
Next, you're ready to run a native image build. You can use standard Maven tooling to
use GraalVM native image. For Maven, use the following command:
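A minimal sketch of that command, assuming the native profile that the archetype configures in the generated POM (check your POM for the exact profile name):
Bash
mvn clean package -Pnative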
After you run this command, GraalVM outputs a native executable for the platform it's
running on. The executable appears in the Maven /target directory of your project. You
can now run your application with this executable file, and it should perform similarly to
a standard Java application.
Next steps
Get started with Azure extensions for IntelliJ and Eclipse
Get started with Azure SDK and Gradle
This article shows you how to use Gradle to build applications with the Azure SDK for
Java. In this article, you set up a new project with Gradle, build projects with Gradle, and
use the GraalVM native image tooling to create platform-specific native binaries.
Prerequisites
Java Developer Kit, version 8 or later. We recommend version 17 for the best
experience.
Gradle
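As a minimal sketch, you can bootstrap a new Gradle application project with the following command (the exact prompts depend on your Gradle version):
Bash
gradle init --type java-application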
You're prompted to answer a short series of questions, after which you have a directory
containing a collection of files and subdirectories. To ensure that the generated files
compile, run the following command to verify the build:
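A minimal sketch, assuming the Gradle wrapper generated by gradle init:
Bash
./gradlew build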
You can now move on to editing the build.gradle file located in the app directory. For
starters, to make dependency version management simpler, the Azure SDK for Java
team publishes the Azure SDK for Java client BOM each month. This BOM file includes
all Generally Available (GA) Azure SDK for Java client packages with their compatible
dependency version.
To use dependency versions for an Azure SDK for Java client library that is in the BOM,
include the following snippet in the project build.gradle file. Replace the
{bom_version_to_target} placeholder with the latest release of the Azure SDK for Java
BOM .
groovy
dependencies {
    implementation platform('com.azure:azure-sdk-bom:{bom_version_to_target}')
}
You can find all releases of the Azure SDK for Java client BOM at azure-sdk-bom . We
recommend using the latest version to take advantage of the newest features of the
Azure SDK for Java client libraries.
Once you've started depending on the Azure SDK for Java BOM, you can include
dependencies on libraries without specifying their version. These version values are
looked up automatically in the Azure SDK for Java BOM. For example, to include an
azure-storage-blob dependency, add the following lines to your build.gradle file:
groovy
dependencies {
implementation 'com.azure:azure-storage-blob'
}
Using Gradle to define project dependencies can make managing your projects simpler.
With the Azure SDK BOM, you can accelerate your project while being more confident
about your dependency versioning over the long term. We recommend using the BOM
to keep dependencies aligned and up to date.
groovy
dependencies {
// Use the dependency version that is in the BOM
implementation 'com.azure:azure-messaging-eventhubs'
// Override the Service Bus dependency version specified in the BOM
implementation 'com.azure:azure-messaging-servicebus:7.4.0'
}
If you use this approach and specify versions directly in your project, you might get
dependency version conflicts. These conflicts arise because different packages may
depend on different versions of common dependencies, and these versions may not be
compatible with each other. When conflicts occur, you can experience undesirable
behavior at compile time or runtime. We recommend that you rely on the versions in the
Azure SDK BOM unless it's necessary to override them. For more information on dealing with
dependencies when using the Azure SDK for Java, see Troubleshoot dependency version
conflicts.
To get started, you need to install GraalVM and prepare your development system for
compiling native images. The installation process for GraalVM is straightforward, and the
GraalVM documentation provides step-by-step instructions for installing GraalVM and
using GraalVM to install native-image . Follow the prerequisites section carefully to
install the necessary native compilers for your operating system.
With your existing Gradle-based project, you can follow the GraalVM instructions for
Gradle on how to add GraalVM support to your project. In doing so, you then have
more build options, allowing you to compile your application into the standard Java
bytecode, or into a native image compiled by GraalVM.
Next, you're ready to run a native image build. You can use standard Gradle tooling to
use GraalVM native image. For Gradle, use the following command:
Bash
gradle nativeCompile
After you run this command, GraalVM outputs a native executable for the platform it's
running on. The executable appears in the Gradle /app/build/native/nativeCompile
directory of your project. You can now run your application with this executable file, and
it should perform similarly to a standard Java application.
Next steps
Get started with Azure extensions for IntelliJ and Eclipse
Get started with Azure extensions for IntelliJ and Eclipse
This article walks you through setting up a development environment for Azure
development in Java. Microsoft provides IDE extensions for both IntelliJ and Eclipse to
increase productivity when working with the Azure SDK for Java.
2. Select Browse repositories, and then search for Azure and install the Azure Toolkit for
IntelliJ.
3. Restart IntelliJ.
4. Open the Azure SDK Reference Book from Tools > Azure > Azure SDK Reference Book.
Install the Azure Toolkit for Eclipse
The Azure toolkit is necessary if you plan to deploy web apps or APIs programmatically.
Currently, it isn't used for any other kinds of development. For a quickstart, see Create a
Hello World web app for Azure App Service using Eclipse.
1. Select the Help menu, and then select Install new software.
2. In the Work with box, enter http://dl.microsoft.com/eclipse/ and select Enter.
3. Select the check box next to Azure toolkit for Java. Clear the check box for
Contact all update sites during install to find required software. Then select Next.
Next steps
Create a Hello World web app for Azure App Service using IntelliJ
Create a Hello World web app for Azure App Service using Eclipse
This article provides links to the Java libraries, drivers, Spring modules, and related articles
available for use with Azure.
Microsoft’s goal is to empower every developer to achieve more, and our commitment to Java
developers is no exception. Java and Spring developers want to use idiomatic libraries to
simplify connections to their preferred cloud services. These libraries, drivers, and modules let
you easily interact with Azure services across data, messaging, cache, storage, eventing,
directory, and secrets management. Use the following list to find the right library, driver, or
module and the guides to get started. Each entry lists the category and Azure service, followed by
the Java library or driver, the Java getting-started guide, the Spring module, and the Spring
getting-started guide.
Data, SQL database: SQL Database JDBC driver. Java getting started: Use Java and JDBC with Azure SQL Database. Spring modules: Spring Data JDBC, JPA, and R2DBC. Spring getting started: Use Spring Data (JDBC, JPA, R2DBC) with Azure SQL Database.
Data, MySQL: MySQL JDBC driver. Java getting started: Quickstart: Use Java and JDBC with Azure Database for MySQL. Spring modules: Spring Data JDBC, JPA, and R2DBC. Spring getting started: Use Spring Data (JDBC, JPA, R2DBC) with Azure Database for MySQL.
Data, PostgreSQL: PostgreSQL JDBC driver. Java getting started: Quickstart: Use Java and JDBC with Azure Database for PostgreSQL Flexible Server. Spring modules: Spring Data JDBC, JPA, and R2DBC. Spring getting started: Use Spring Data (JDBC, JPA, R2DBC) with Azure Database for PostgreSQL.
Data, MariaDB: MariaDB driver. Java getting started: MariaDB drivers and management tools compatible with Azure Database for MariaDB. Spring modules: Spring Data JDBC, JPA, and R2DBC. Spring getting started: Use Spring Data (JDBC, JPA, R2DBC) with Azure Database for MySQL.
Data, Azure Cosmos DB - SQL: Maven Repository: com.azure » azure-cosmos. Java getting started: Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data. Spring module: Spring Data Azure Cosmos DB. Spring getting started: How to use the Spring Boot Starter with Azure Cosmos DB for NoSQL.
Data, Azure Cosmos DB - MongoDB: MongoDB Java Drivers. Java getting started: Quickstart: Create a console app with Java and Azure Cosmos DB for MongoDB. Spring module: Spring Data MongoDB. Spring getting started: How to use Spring Data with Azure Cosmos DB for MongoDB.
Data, Azure Cosmos DB - Cassandra: Datastax Java Driver for Apache Cassandra. Java getting started: Quickstart: Build a Java app to manage Azure Cosmos DB for Apache Cassandra data (v4 Driver). Spring module: Spring Data Apache Cassandra. Spring getting started: How to use Spring Data with Azure Cosmos DB for Apache Cassandra.
Cache, Redis: Lettuce client. Java getting started: Best Practices for using Azure Cache for Redis with Lettuce. Spring modules: Spring Data Redis (reference) and Spring Cloud Azure Redis support. Spring getting started: Configure a Spring Boot Initializer app to use Redis in the cloud with Azure Redis Cache.
Storage, Azure Storage: Maven Repository: com.azure » azure-storage-blob. Java getting started: Quickstart: Manage blobs with Java v12 SDK. Spring module: Spring Cloud Azure resource handling. Spring getting started: How to use the Spring Boot Starter for Azure Storage.
Messaging, Service Bus: JMS + AMQP. Java getting started: Send messages to an Azure Service Bus topic and receive messages from subscriptions to the topic. Spring modules: Spring AMQP and Spring Cloud Azure JMS support. Spring getting started: How to use Spring Boot Starter for Azure Service Bus JMS.
Messaging, Service Bus: Azure Service Bus client library for Java. Java getting started: Azure Service Bus Samples client library for Java. Spring modules: Spring AMQP, Spring integration with Azure Service Bus, and Spring Cloud Stream Binder for Azure Service Bus. Spring getting started: How to use Spring Cloud Azure Stream Binder for Azure Service Bus.
Eventing, Event Hubs: Kafka. Java getting started: Send and Receive Messages in Java using Azure Event Hubs for Apache Kafka Ecosystems. Spring modules: Spring for Apache Kafka and Spring Cloud Azure Kafka support. Spring getting started: How to use the Spring Boot Starter for Apache Kafka with Azure Event Hubs.
Eventing, Event Hubs: Azure Event Hubs libraries for Java. Java getting started: Use Java to send events to or receive events from Azure Event Hubs. Spring module: Spring Cloud Stream Binder for Event Hubs. Spring getting started: How to create a Spring Cloud Stream Binder application with Azure Event Hubs.
Directory, Azure Active Directory B2C: MSAL. Java getting started: Enable Java Servlet apps to sign in users on Azure AD B2C. Spring module: Azure AD B2C Spring Boot Starter. Spring getting started: Enable Spring Boot Web apps to sign in users on Azure AD B2C.
Secrets, Key Vault: Key Vault Secrets. Java getting started: Manage secrets using Key Vault. Spring module: Key Vault Secrets Spring Boot Starter. Spring getting started: Manage secrets for Spring Boot apps.
Next steps
For all other libraries, see Azure SDK for Java libraries.
Use the Azure SDK for Java
The open-source Azure SDK for Java simplifies provisioning, managing, and using Azure
resources from Java application code.
Important details
The Azure libraries are how you communicate with Azure services from Java code
that you run either locally or in the cloud.
The libraries support Java 8 and later, and are tested against both the Java 8
baseline and the latest Java 'long-term support' release.
The libraries include full Java module support, which means that they're fully
compliant with the requirements of a Java module and export all relevant packages
for use.
The Azure SDK for Java is composed solely of many individual Java libraries that
relate to specific Azure services. There are no other tools in the "SDK".
There are distinct "management" and "client" libraries (sometimes referred to as
"management plane" and "data plane" libraries). Each set serves different purposes
and is used by different kinds of code. For more information, see the following
sections later in this article:
Connect to and use Azure resources with client libraries.
Provision and manage Azure resources with management libraries.
You can find documentation for the libraries in the Azure for Java Reference
organized by Azure Service, or the Java API browser organized by package name.
Other details
The Azure SDK for Java libraries build on top of the underlying Azure REST API,
allowing you to use those APIs through familiar Java paradigms. However, you can
always use the REST API directly from Java code, if you prefer.
You can find the source code for the Azure libraries in the GitHub repository . As
an open-source project, contributions are welcome!
We're currently updating the Azure SDK for Java libraries to share common cloud
patterns such as authentication protocols, logging, tracing, transport protocols,
buffered responses, and retries.
This shared functionality is contained in the azure-core library.
For more information on the guidelines we apply to the libraries, see the Java
Azure SDK Design Guidelines .
Supported platforms for Azure SDK for Java
The Azure SDK for Java ships with support for Java 8 and later, but we recommend that
developers always use the latest Java long-term support (LTS) release in development
and when releasing to production. Using the latest LTS release ensures the availability of
the latest improvements within Java, including bug fixes, performance improvements,
and security fixes. Also, the Azure SDK for Java includes additional support for later
releases of Java. This additional support improves performance and includes JDK-
specific enhancements beyond the supported Java 8 baseline.
The Azure SDK for Java is tested and supported on Windows, Linux, and macOS. It isn't
tested on other platforms that the JDK supports, and doesn't support Android
deployments. Developers who want to build Android applications that use Azure services
can use the Android-specific libraries available in the Azure SDK for Android project.
All Azure Java client libraries follow the same API design pattern of offering a Java
builder class that's responsible for creating an instance of a client. This pattern separates
the definition and instantiation of the client from its operation, allowing the client to be
immutable and therefore easier to use. Additionally, all client libraries follow a few
important patterns:
Client libraries that support both synchronous and asynchronous APIs must offer
these APIs in separate classes. For example, there's a KeyVaultClient for sync APIs
and a KeyVaultAsyncClient for async APIs.
There's a single builder class that takes responsibility for building both the sync
and async APIs. The builder is named similarly to the sync client class, with Builder
included. For example, KeyVaultClientBuilder . This builder has buildClient() and
buildAsyncClient() methods to create client instances, as appropriate.
Because of these conventions, all classes ending in Client are immutable and provide
operations to interact with an Azure service. All classes that end in ClientBuilder
provide operations to configure and create an instance of a particular client type.
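For example, here's a minimal sketch of building a synchronous Key Vault KeyClient, assuming the azure-security-keyvault-keys and azure-identity libraries; the vault URL is a placeholder:
Java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.keys.KeyClient;
import com.azure.security.keyvault.keys.KeyClientBuilder;

// Build a synchronous client from its builder; the vault URL is a placeholder.
KeyClient keyClient = new KeyClientBuilder()
    .vaultUrl("<your-key-vault-url>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();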
The following code example shows how to create an asynchronous Key Vault
KeyAsyncClient :
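A minimal sketch, under the same assumptions as the previous example:
Java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.keys.KeyAsyncClient;
import com.azure.security.keyvault.keys.KeyClientBuilder;

// The same builder type produces the async client via buildAsyncClient().
KeyAsyncClient keyAsyncClient = new KeyClientBuilder()
    .vaultUrl("<your-key-vault-url>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();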
For more information on working with each client library, see the README.md file
located in the library's project directory in the SDK GitHub repository . You can also
find more code snippets in the reference documentation and the Azure Samples.
With the management libraries, you can write configuration and deployment scripts to
perform the same tasks that you can through the Azure portal or the Azure CLI.
All Azure Java management libraries provide a *Manager class as the service API, for
example, ComputeManager for the Azure compute service, or AzureResourceManager for the
aggregation of popular services.
Management libraries example
The following code example shows how to create a ComputeManager :
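A minimal sketch, assuming the azure-resourcemanager-compute and azure-identity libraries; this AzureProfile relies on the AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID environment variables being set:
Java
import com.azure.core.credential.TokenCredential;
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.compute.ComputeManager;

// Authenticate a ComputeManager with DefaultAzureCredential and the public Azure cloud profile.
TokenCredential credential = new DefaultAzureCredentialBuilder().build();
AzureProfile profile = new AzureProfile(AzureEnvironment.AZURE);
ComputeManager computeManager = ComputeManager.authenticate(credential, profile);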
The following code example shows how to provision a new virtual machine:
Java
// Assumes the computeManager instance created earlier. The fluent calls before
// withPopularLinuxImage are representative; adjust the name, region, resource group,
// and network settings for your environment.
VirtualMachine virtualMachine = computeManager.virtualMachines()
    .define("<virtual-machine name>")
    .withRegion(Region.US_EAST)
    .withExistingResourceGroup("<resource-group name>")
    .withNewPrimaryNetwork("10.0.0.0/28")
    .withPrimaryPrivateIPAddressDynamic()
    .withoutPrimaryPublicIPAddress()
    .withPopularLinuxImage(KnownLinuxVirtualMachineImage.UBUNTU_SERVER_18_04_LTS)
    .withRootUsername("<virtual-machine username>")
    .withSsh("<virtual-machine SSH key>")
    .create();
The following code example shows how to get an existing virtual machine:
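A minimal sketch, assuming the computeManager instance from the preceding examples; the resource group and virtual machine names are placeholders:
Java
import com.azure.resourcemanager.compute.models.VirtualMachine;

// Look up an existing virtual machine by resource group and name.
VirtualMachine virtualMachine = computeManager.virtualMachines()
    .getByResourceGroup("<resource-group-name>", "<virtual-machine-name>");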
The following code example shows how to update the virtual machine and add a new
data disk:
Java
virtualMachine.update()
.withNewDataDisk(10)
.apply();
For more information on working with each management library, see the README.md
file located in the library's project directory in the SDK GitHub repository . You can also
find more code snippets in the reference documentation and the Azure Samples.
Next steps
Now that you understand what the Azure SDK for Java is, you can take a deep dive into
many of the cross-cutting concepts that exist to make you productive when using the
libraries. The following articles provide good starting points:
This article provides an overview of the Azure Identity library for Java, which provides
Microsoft Entra token authentication support across the Azure SDK for Java. This library
provides a set of TokenCredential implementations that you can use to construct Azure
SDK clients that support Microsoft Entra token authentication.
Follow these links to learn more about the specifics of each of these authentication
approaches. In the rest of this article, we introduce the commonly used
DefaultAzureCredential and related subjects.
XML
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-sdk-bom</artifactId>
<version>{bom_version_to_target}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Then include the direct dependency in the dependencies section without the version tag:
XML
<dependencies>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-identity</artifactId>
</dependency>
</dependencies>
Key concepts
There are two key concepts in understanding the Azure Identity library: the concept of a
credential, and the most common implementation of that credential,
DefaultAzureCredential .
A credential is a class that contains or can obtain the data needed for a service client to
authenticate requests. Service clients across the Azure SDK accept credentials when
they're constructed, and service clients use those credentials to authenticate requests to
the service.
The Azure Identity library focuses on OAuth authentication with Microsoft Entra ID, and
it offers various credential classes that can acquire a Microsoft Entra token to
authenticate service requests. All of the credential classes in this library are
implementations of the TokenCredential abstract class in azure-core , and you can use
any of them to construct service clients that can authenticate with a TokenCredential .
DefaultAzureCredential is appropriate for most scenarios where the application is intended to ultimately run in Azure.
Examples
As noted in Use the Azure SDK for Java, the management libraries differ slightly. One of
the ways they differ is that there are libraries for consuming Azure services, called client
libraries, and libraries for managing Azure services, called management libraries. In the
following sections, there's a quick overview of authenticating in both client and
management libraries.
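With client libraries, you build a credential and pass it to the client builder. As a minimal sketch, assuming the azure-security-keyvault-secrets and azure-identity libraries; the vault URL is a placeholder:
Java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

// Pass a DefaultAzureCredential to the client builder.
SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl("<your-key-vault-url>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();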
You can find the subscription IDs on the Subscriptions page in the Azure portal .
Alternatively, use the following Azure CLI command to get subscription IDs:
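For example:
Azure CLI
az account list --output table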
Java
AzureResourceManager azureResourceManager =
AzureResourceManager.authenticate(
new DefaultAzureCredentialBuilder().build(),
new AzureProfile(AzureEnvironment.AZURE))
.withDefaultSubscription();
This code authenticates an AzureResourceManager instance by using DefaultAzureCredential.
You can also use other TokenCredential implementations offered in the Azure Identity library
in place of DefaultAzureCredential.
Troubleshooting
For guidance, see Troubleshoot Azure Identity authentication issues.
Next steps
This article introduced the Azure Identity functionality available in the Azure SDK for
Java. It described DefaultAzureCredential as common and appropriate in many cases.
The following articles describe other ways to authenticate using the Azure Identity
library, and provide more information about DefaultAzureCredential :
This article looks at how the Azure Identity library supports Microsoft Entra token
authentication for applications hosted on Azure. This support is made possible through
a set of TokenCredential implementations, which are discussed in this article.
DefaultAzureCredential
ManagedIdentityCredential
DefaultAzureCredential
DefaultAzureCredential combines credentials that are commonly used to authenticate deployed applications with credentials that are used to authenticate in development environments.
Configure DefaultAzureCredential
DefaultAzureCredential supports a set of configurations through setters on the DefaultAzureCredentialBuilder or through environment variables:
Setting the environment variables AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID, as described in the Environment variables section, configures DefaultAzureCredential to authenticate as the service principal specified by those values.
Setting .managedIdentityClientId(String) on the builder or the environment
variable AZURE_CLIENT_ID configures DefaultAzureCredential to authenticate as a
user-assigned managed identity, while leaving them empty configures it to
authenticate as a system-assigned managed identity.
Setting .tenantId(String) on the builder or the environment variable AZURE_TENANT_ID configures DefaultAzureCredential to authenticate to a specific tenant for the shared token cache and IntelliJ credentials.
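A minimal sketch of the default configuration, which authenticates as a system-assigned managed identity when running on an Azure host with no client ID configured:
Java
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;

// With no client ID configured, the credential falls back to the
// system-assigned managed identity when running on an Azure host.
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
    .build();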
Java
/**
* DefaultAzureCredential uses the user-assigned managed identity with the
specified client ID.
*/
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
.managedIdentityClientId("<CLIENT_ID>")
.build();
ManagedIdentityCredential
ManagedIdentityCredential authenticates the managed identity (system-assigned or
user-assigned) of an Azure resource. So, if the application is running inside an Azure
resource that supports managed identity through IDENTITY/MSI, IMDS endpoints, or
both, then this credential authenticates your application and offers a secretless
authentication experience.
For more information, see What are managed identities for Azure resources?.
Java
/**
* Authenticate with a user-assigned managed identity.
*/
ManagedIdentityCredential credential = new
ManagedIdentityCredentialBuilder()
.clientId("<CLIENT_ID>") // required only for user-assigned
.build();
Environment variables
You can configure DefaultAzureCredential and EnvironmentCredential with
environment variables. Each type of authentication requires values for specific variables:
Service principal with secret: AZURE_CLIENT_ID (the app registration's client ID), AZURE_TENANT_ID (the tenant ID), and AZURE_CLIENT_SECRET (the client secret).
Service principal with certificate: AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_CERTIFICATE_PATH (the path to a PEM or PFX certificate file that includes the private key).
Username and password: AZURE_CLIENT_ID, AZURE_USERNAME, and AZURE_PASSWORD.
Configuration is attempted in this order. For example, if values for a client secret and
certificate are both present, the client secret is used.
Next steps
This article covered authentication for applications hosted in Azure. This form of
authentication is one of multiple ways you can authenticate in the Azure SDK for Java.
The following articles describe other ways:
Azure authentication in development environments
Authentication with service principals
Authentication with user credentials
After you've mastered authentication, see Configure logging in the Azure SDK for Java
for information on the logging functionality provided by the SDK.
This article provides an overview of the Azure Identity library support for Microsoft Entra
token authentication. This support enables authentication for applications running
locally on developer machines through a set of TokenCredential implementations.
For more information, see Microsoft identity platform and the OAuth 2.0 device
authorization grant flow.
1. Go to Microsoft Entra ID in the Azure portal and find your app registration.
2. Navigate to the Authentication section.
3. Under Suggested Redirect URIs, check the URI that ends with
/common/oauth2/nativeclient.
4. Under Default Client Type, select yes for Treat application as a public client.
These steps enable the application to authenticate, but it still doesn't have permission to
sign you into Microsoft Entra ID, or access resources on your behalf. To address this
issue, navigate to API Permissions, and enable Microsoft Graph and the resources you
want to access.
You must also be the admin of your tenant to grant consent to your application when
you sign in for the first time.
If you can't configure the device code flow option in your Microsoft Entra tenant, it may
require your app to be multi-tenant. To make your app multi-tenant, navigate to the
Authentication panel, and then select Accounts in any organizational directory. Then,
select yes for Treat application as a public client.
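As a minimal sketch, the device code flow can be wired up as follows; the Key Vault SecretClient and vault URL are illustrative, and any client that accepts a TokenCredential works the same way:
Java
import com.azure.identity.DeviceCodeCredential;
import com.azure.identity.DeviceCodeCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

// Prompt the user to sign in with a device code, then pass the credential to a client.
DeviceCodeCredential deviceCodeCredential = new DeviceCodeCredentialBuilder()
    .challengeConsumer(challenge -> System.out.println(challenge.getMessage()))
    .build();

SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl("<your-key-vault-url>")
    .credential(deviceCodeCredential)
    .buildClient();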
Azure CLI
az login
Sign in as a service principal using the following command:
Azure CLI
az login \
--service-principal \
--username <client-ID> \
--password <client-secret> \
--tenant <tenant-ID>
If the account or service principal has access to multiple tenants, make sure the desired
tenant or subscription is in the state "Enabled" in the output from the following
command:
Azure CLI
az account list
Before you use AzureCliCredential in code, run the following command to verify that
the account has been successfully configured.
Azure CLI
az account get-access-token
You may need to repeat this process after a certain time period, depending on the
refresh token validity in your organization. Generally, the refresh token validity period is
a few weeks to a few months. AzureCliCredential prompts you to sign in again.
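As a minimal sketch, AzureCliCredential reuses your az login session; the Key Vault client and vault URL are illustrative:
Java
import com.azure.identity.AzureCliCredential;
import com.azure.identity.AzureCliCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

// Authenticate with the account that signed in through the Azure CLI.
AzureCliCredential azureCliCredential = new AzureCliCredentialBuilder().build();

SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl("<your-key-vault-url>")
    .credential(azureCliCredential)
    .buildClient();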
Next steps
This article covered authentication during development using credentials available on
your computer. This form of authentication is one of multiple ways you can authenticate
in the Azure SDK for Java. The following articles describe other ways:
Authenticating applications hosted in Azure
Authentication with service principals
Authentication with user credentials
After you've mastered authentication, see Configure logging in the Azure SDK for Java
for information on the logging functionality provided by the SDK.
This article looks at how the Azure Identity library supports Microsoft Entra token
authentication via service principal. This article covers the following subjects:
For more information, see Application and service principal objects in Microsoft Entra ID.
For troubleshooting service principal authentication issues, see Troubleshoot service
principal authentication.
Use the following command to create a service principal and configure its access to
Azure resources:
Azure CLI
az ad sp create-for-rbac \
--name <your application name> \
--role Contributor \
--scopes /subscriptions/mySubscriptionID
Output
{
"appId": "generated-app-ID",
"displayName": "dummy-app-name",
"name": "http://dummy-app-name",
"password": "random-password",
"tenant": "tenant-ID"
}
Use the following command to create a service principal along with a certificate. Note
down the path/location of this certificate.
Azure CLI
az ad sp create-for-rbac \
--name <your application name> \
--role Contributor \
--cert <certificate name> \
--create-cert
Check the returned credentials and note down the following information:
Java
/**
* Authenticate with client secret.
*/
ClientSecretCredential clientSecretCredential = new
ClientSecretCredentialBuilder()
.clientId("<your client ID>")
.clientSecret("<your client secret>")
.tenantId("<your tenant ID>")
.build();
Java
/**
* Authenticate with a client certificate.
*/
ClientCertificateCredential clientCertificateCredential = new
ClientCertificateCredentialBuilder()
.clientId("<your client ID>")
.pemCertificate("<path to PEM certificate>")
// Choose between either a PEM certificate or a PFX certificate.
//.pfxCertificate("<path to PFX certificate>")
//.clientCertificatePassword("PFX CERTIFICATE PASSWORD")
.tenantId("<your tenant ID>")
.build();
Next steps
This article covered authentication via service principal. This form of authentication is
one of multiple ways you can authenticate in the Azure SDK for Java. The following
articles describe other ways:
If you run into issues related to service principal authentication, see Troubleshoot service
principal authentication.
After you've mastered authentication, see Configure logging in the Azure SDK for Java
for information on the logging functionality provided by the SDK.
This article looks at how the Azure Identity library supports Microsoft Entra token
authentication with user-provided credentials. This support is made possible through a
set of TokenCredential implementations discussed in this article.
For more information, see Microsoft identity platform and the OAuth 2.0 device
authorization grant flow.
4. Under Default Client Type, select yes for Treat application as a public client .
These steps enable the application to authenticate, but it still doesn't have permission to
sign you into Microsoft Entra ID, or access resources on your behalf. To address this
issue, navigate to API Permissions, and enable Microsoft Graph and the resources you
want to access, such as Key Vault.
You also need to be the admin of your tenant to grant consent to your application when
you sign in for the first time.
If you can't configure the device code flow option in your Microsoft Entra tenant, it may
require your app to be multi-tenant. To make your app multi-tenant, navigate to the
Authentication panel, and then select Accounts in any organizational directory. Then,
select yes for Treat application as a public client.
Java
/**
* Authenticate with device code credential.
*/
DeviceCodeCredential deviceCodeCredential = new
DeviceCodeCredentialBuilder()
.challengeConsumer(challenge -> {
// Lets the user know about the challenge.
System.out.println(challenge.getMessage());
}).build();
Java
/**
* Authenticate interactively in the browser.
*/
InteractiveBrowserCredential interactiveBrowserCredential = new
InteractiveBrowserCredentialBuilder()
.clientId("<your app client ID>")
.redirectUrl("YOUR_APP_REGISTERED_REDIRECT_URL")
.build();
Next steps
This article covered authentication with user credentials. This form of authentication is
one of multiple ways you can authenticate in the Azure SDK for Java. The following
articles describe other ways:
After you've mastered authentication, see Configure logging in the Azure SDK for Java
for information on the logging functionality provided by the SDK.
Credential chains in the Azure Identity
library for Java
The Azure Identity library provides credentials—public classes that implement the Azure Core
library's TokenCredential interface. A credential represents a distinct authentication flow for
acquiring an access token from Microsoft Entra ID. These credentials can be chained together
to form an ordered sequence of authentication mechanisms to be attempted.
(Sequence diagram: the chained credential traverses its TokenCredential collection, calling getToken on each credential in turn until one returns an AccessToken, which is then returned to the caller.)
Java
import com.azure.core.credential.TokenCredential;
import com.azure.identity.AzureCliCredentialBuilder;
import com.azure.identity.ManagedIdentityCredentialBuilder;
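Building on the preceding imports, here's a minimal sketch of selecting one specific credential per environment; the APP_ENVIRONMENT variable and the client ID placeholder are illustrative assumptions:
Java
import com.azure.core.credential.TokenCredential;
import com.azure.identity.AzureCliCredentialBuilder;
import com.azure.identity.ManagedIdentityCredentialBuilder;

// Pick a single credential based on where the app is running.
TokenCredential credential;
if ("production".equals(System.getenv("APP_ENVIRONMENT"))) {
    credential = new ManagedIdentityCredentialBuilder()
        .clientId("<user-assigned-managed-identity-client-id>") // omit for system-assigned
        .build();
} else {
    credential = new AzureCliCredentialBuilder().build();
}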
A chained credential offers the following benefits:
Seamless transitions: Your app can move from local development to your staging or production environment without changing authentication code.
Improved resiliency: Includes a fallback mechanism that moves to the next credential when the prior one fails to acquire an access token.
DefaultAzureCredential attempts to authenticate with the following credential types, in order:
1. Environment: If environment variables for a service principal or user are set, authenticate as that identity.
2. Workload Identity: If the app is deployed to an Azure host with Workload Identity enabled, authenticate that account.
3. Managed Identity: If the app is deployed to an Azure host with Managed Identity enabled, authenticate the app to Azure using that Managed Identity.
4. Shared Token Cache: If the developer authenticated to Azure by logging into Visual Studio, authenticate the app to Azure using that same account. (Windows only.)
5. IntelliJ: If the developer authenticated via Azure Toolkit for IntelliJ, authenticate that account.
6. Azure CLI: If the developer authenticated to Azure using Azure CLI's az login command, authenticate the app to Azure using that same account.
7. Azure PowerShell: If the developer authenticated to Azure using Azure PowerShell's Connect-AzAccount cmdlet, authenticate the app to Azure using that same account.
8. Azure Developer CLI: If the developer authenticated to Azure using Azure Developer CLI's azd auth login command, authenticate with that account.
In its simplest form, you can use the parameterless version of DefaultAzureCredential as
follows:
Java
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;

// Build the credential with its default, parameterless configuration.
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
ChainedTokenCredential overview
ChainedTokenCredential is an empty chain to which you add credentials to suit your app's
needs. For example:
Java
import com.azure.identity.AzureCliCredential;
import com.azure.identity.AzureCliCredentialBuilder;
import com.azure.identity.ChainedTokenCredential;
import com.azure.identity.ChainedTokenCredentialBuilder;
import com.azure.identity.IntelliJCredential;
import com.azure.identity.IntelliJCredentialBuilder;

// Code omitted for brevity

// Try the Azure CLI credential first, then fall back to the IntelliJ credential.
ChainedTokenCredential credential = new ChainedTokenCredentialBuilder()
    .addLast(new AzureCliCredentialBuilder().build())
    .addLast(new IntelliJCredentialBuilder().build())
    .build();
The preceding code sample creates a tailored credential chain made up of two development-time
credentials. AzureCliCredential is attempted first, followed by IntelliJCredential, if necessary.
Tip
DefaultAzureCredential is the most approachable way to get started with the Azure Identity
library, but with that convenience comes tradeoffs. Once you deploy your app to Azure, you
should understand the app's authentication requirements. For that reason, replace
DefaultAzureCredential with a specific TokenCredential implementation, such as
ManagedIdentityCredential.
This article provides an overview of how to enable logging in applications that make use
of the Azure SDK for Java. The Azure client libraries for Java have two logging options:
A built-in logging framework for temporary debugging purposes.
Support for logging using the SLF4J interface.
We recommend that you use SLF4J because it's well known in the Java ecosystem and
it's well documented. For more information, see the SLF4J user manual.
This article links to other articles that cover many of the popular Java logging
frameworks. These other articles provide configuration examples, and describe how the
Azure client libraries can use the logging frameworks.
Whatever logging configuration you use, the same log output is available in either case
because all logging output in the Azure client libraries for Java is routed through an
azure-core ClientLogger abstraction.
The rest of this article details the configuration of all available logging options.
If you use OpenTelemetry, consider using distributed tracing instead of logging for HTTP
requests. For more information, see Configure tracing in the Azure SDK for Java.
BASIC : HTTP logs include the request method, the URL, the response code, and the content length for request and response bodies.
HEADERS : HTTP logs include all the basic details and also include headers that are
known to be safe for logging purposes - that is, they don't contain secrets or
sensitive information. The full list of header names is available in the
HttpLogOptions class.
BODY_AND_HEADERS : HTTP logs include all the details provided by the HEADERS level
and also include request and response bodies as long as they're smaller than 16 KB
and printable.
Note
The request URL is sanitized - that is, all query parameter values are redacted
except for the api-version value. Individual client libraries may add other query
parameters that are known to be safe to the allowlist.
For example, the Azure Blob Storage shared access signature (SAS) URL is logged in the
following format: https://myaccount.blob.core.windows.net/pictures/profile.jpg?sv=REDACTED&st=REDACTED&se=REDACTED&sr=REDACTED&sp=REDACTED&rscd=REDACTED&rsct=REDACTED&sig=REDACTED
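You can configure these options through httpLogOptions on a client builder. As a minimal sketch, assuming the azure-data-appconfiguration library; the connection string is a placeholder:
Java
import com.azure.core.http.policy.HttpLogDetailLevel;
import com.azure.core.http.policy.HttpLogOptions;
import com.azure.data.appconfiguration.ConfigurationClient;
import com.azure.data.appconfiguration.ConfigurationClientBuilder;

// Enable HEADERS-level HTTP logging and allow an extra header and query parameter.
ConfigurationClient configurationClient = new ConfigurationClientBuilder()
    .connectionString("<your-app-configuration-connection-string>")
    .httpLogOptions(new HttpLogOptions()
        .setLogLevel(HttpLogDetailLevel.HEADERS)
        .addAllowedHeaderName("Accept-Ranges")
        .addAllowedQueryParamName("label"))
    .buildClient();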
This code enables HTTP logs with headers and adds the Accept-Ranges response header
and the label query parameter to the corresponding allowlists. After this change, these
values should appear in the produced logs.
For the full list of configuration options, see the HttpLogOptions documentation.
After you set the AZURE_LOG_LEVEL environment variable, restart the application for it
to take effect. This logger logs to the console, and doesn't provide the
advanced customization capabilities of an SLF4J implementation, such as rollover and
logging to file. To turn the logging off again, just remove the environment variable and
restart the application.
SLF4J logging
By default, you should configure logging using an SLF4J-supported logging framework.
First, include a relevant SLF4J logging implementation as a dependency in your
project. For more information, see Declaring project dependencies for logging in the
SLF4J user manual. Next, configure your logger to work as necessary in your
environment, such as setting log levels, configuring which classes do and don't log, and
so on. Some examples are provided through the links in this article, but for more
information, see the documentation for your chosen logging framework.
Log format
Logging frameworks support custom log message formatting and layouts. We
recommend including at least the following fields to make it possible to troubleshoot Azure
client libraries:
For examples, see the documentation for the logging framework you use.
Structured logging
In addition to logging the common properties mentioned earlier, Azure client libraries
annotate log messages with extra context when applicable. For example, you might see
JSON-formatted logs containing az.sdk.message with context written as other root
properties, as shown in the following example:
When you send logs to Azure Monitor, you can use the Kusto query language to parse
them. The following query provides an example:
Kusto
traces
| where message startswith "{\"az.sdk.message"
| project timestamp, logger=customDimensions["LoggerName"],
level=customDimensions["LoggingLevel"],
thread=customDimensions["ThreadName"], azSdkContext=parse_json(message)
| evaluate bag_unpack(azSdkContext)
Note
Azure client library logs are intended for ad-hoc debugging. We don't recommend
relying on the log format to alert or monitor your application. Azure client libraries
do not guarantee the stability of log messages or context keys. For such purposes,
we recommend using distributed tracing. The Application Insights Java agent
provides stability guarantees for request and dependency telemetry. For more
information, see Configure tracing in the Azure SDK for Java.
Next steps
Now that you know how logging works in the Azure SDK for Java, consider reviewing
the following articles. These articles provide guidance on how to configure some of the
more popular Java logging frameworks to work with SLF4J and the Java client libraries:
java.util.logging
Logback
Log4J
Tracing
For more information related to configuring your logger, see Configuring Logging
Output in the Oracle documentation.
XML
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-jdk14</artifactId>
<version>1.7.30</version> <!-- replace this version with the latest
available version on Maven central -->
</dependency>
Console logging
You can create a configuration to log to the console as shown in the following example.
This example is configured to log all logging events that are INFO level or higher,
wherever they come from.
properties
handlers = java.util.logging.ConsoleHandler
.level = INFO
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=[%1$tF %1$tH:%1$tM:%1$tS.%1$tL] [%4$s] %3$s %5$s %n
Log to a file
The previous example logs to the console, which isn't normally the preferred location for
logs. To configure logging to a file instead, use the following configuration:
properties
handlers = java.util.logging.FileHandler
.level = INFO
java.util.logging.FileHandler.pattern = %h/myapplication.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = INFO
This configuration creates a file called myapplication.log in your home directory (%h). This
logger doesn't support automatic file rotation after a certain period. If you require this
functionality, you need to write a scheduler to manage log file rotation.
Next steps
This article covered the configuration of java.util.logging and how to make the Azure
SDK for Java use it for logging. Because the Azure SDK for Java works with all SLF4J
logging frameworks, consider reviewing the SLF4J user manual for further details.
After you've mastered logging, consider looking into the integrations that Azure offers
into frameworks such as Spring and MicroProfile.
This article provides an overview of how to add logging using Logback to applications
that use the Azure SDK for Java. As mentioned in Configure logging in the Azure SDK for
Java, all Azure client libraries log through SLF4J , so you can use logging frameworks
such as Logback .
XML
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.3</version>
</dependency>
Console logging
You can create a Logback configuration to log to the console as shown in the following
example. This example is configured to log all logging events that are INFO level or
higher, wherever they come from.
XML
<configuration>
  <!-- A standard console appender; adjust the pattern as needed. -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder><pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern></encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
It's possible to have fine-grained control over the logging of specific classes, or specific
packages. As shown here, com.azure.core controls the output of all core classes, but you
could equally use com.azure.security.keyvault or equivalent to control the output as
appropriate for the circumstances that are most informative in the context of the
running application.
XML
<configuration>
  <!-- The STDOUT console appender is defined as in the previous example. -->
  <logger name="com.azure.core" level="ERROR" />
  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
XML
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
  <!-- rollover hourly and gzip logs -->
  <fileNamePattern>${LOGS}/archived/spring-boot-logger-%d{yyyy-MM-dd-HH}.log.gz</fileNamePattern>
</rollingPolicy>
</appender>
Spring applications
The Spring framework works by reading the Spring application.properties file for
various configurations, including the logging configuration. It's possible to configure the
Spring application to read Logback configurations from any file, however. To do so,
configure the logging.config property to point to the logback.xml configuration file by
adding the following line into your Spring /src/main/resources/application.properties
file:
properties
logging.config=classpath:logback.xml
Next steps
This article covered the configuration of Logback and how to make the Azure SDK for
Java use it for logging. Because the Azure SDK for Java works with all SLF4J logging
frameworks, consider reviewing the SLF4J user manual for further details. If you use
Logback, there's also a vast amount of configuration guidance on its website. For more
information, see Logback configuration in the Logback documentation.
After you've mastered logging, consider looking into the integrations that Azure offers
into frameworks such as Spring and MicroProfile.
This article provides an overview of how to add logging using Log4j to applications that use
the Azure SDK for Java. As mentioned in Configure logging in the Azure SDK for Java, all Azure
client libraries log through SLF4J , so you can use logging frameworks such as log4j .
This article provides guidance to use the Log4J 2.x releases, but Log4J 1.x is equally supported
by the Azure SDK for Java. To enable log4j logging, you must do two things:
XML
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.16.0</version>
</dependency>
Note
Due to known vulnerability CVE-2021-44228, be sure to use Log4j version 2.16 or later.
Configuring Log4j
There are two common ways to configure Log4j: through an external properties file, or through
an external XML file. These approaches are outlined below.
properties
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d %5p [%t] %c{3} - %m%n
logger.app.name = com.azure.core
logger.app.level = ERROR
rootLogger.level = info
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
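A minimal sketch of an equivalent XML configuration (a standard log4j2.xml; adjust the appender and logger settings to suit your needs):
XML
<Configuration>
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d %5p [%t] %c{3} - %m%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="com.azure.core" level="error" additivity="true">
            <AppenderRef ref="STDOUT" />
        </Logger>
        <Root level="info">
            <AppenderRef ref="STDOUT" />
        </Root>
    </Loggers>
</Configuration>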
Next steps
This article covered the configuration of Log4j and how to make the Azure SDK for Java use it
for logging. Because the Azure SDK for Java works with all SLF4J logging frameworks, consider
reviewing the SLF4J user manual for further details. If you use Log4j, there's also a vast amount
of configuration guidance on its website. For more information, see Welcome to Log4j 2!
After you've mastered logging, consider looking into the integrations that Azure offers into
frameworks such as Spring.
HTTP clients and pipelines in the Azure
SDK for Java
This article provides an overview of using the HTTP client and pipeline functionality
within the Azure SDK for Java. This functionality provides a consistent, powerful, and
flexible experience for developers using all Azure SDK for Java libraries.
HTTP clients
The Azure SDK for Java is implemented using an HttpClient abstraction. This
abstraction enables a pluggable architecture that accepts multiple HTTP client libraries
or custom implementations. However, to simplify dependency management for most
users, all Azure client libraries depend on azure-core-http-netty . As such, the Netty
HTTP client is the default client used in all Azure SDK for Java libraries.
Although Netty is the default HTTP client, the SDK provides three client
implementations, depending on which dependencies you already have in your project.
These implementations are for:
Netty
OkHttp
HttpClient introduced in JDK 11
Note
The JDK HttpClient in combination with the Azure SDK for Java is only supported
with JDK 12 and higher.
The following example shows you how to exclude the Netty dependency from a real
dependency on the azure-security-keyvault-secrets library. Be sure to exclude Netty
from all appropriate com.azure libraries, as shown here:
XML
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-security-keyvault-secrets</artifactId>
<version>4.2.2</version>
<exclusions>
<exclusion>
<groupId>com.azure</groupId>
<artifactId>azure-core-http-netty</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-core-http-okhttp</artifactId>
<version>1.3.3</version>
</dependency>
Note
If you remove the Netty dependency but provide no implementation in its place,
the application fails to start. An HttpClient implementation must exist on the
classpath.
The following examples show how to build HttpClient instances using Netty, OkHttp,
and the JDK 11 HTTP client. These instances proxy through http://localhost:3128 and
authenticate with user example with password weakPassword .
Java
// Netty
HttpClient httpClient = new NettyAsyncHttpClientBuilder()
.proxy(new ProxyOptions(ProxyOptions.Type.HTTP, new
InetSocketAddress("localhost", 3128))
.setCredentials("example", "weakPassword"))
.build();
// OkHttp
HttpClient httpClient = new OkHttpAsyncHttpClientBuilder()
.proxy(new ProxyOptions(ProxyOptions.Type.HTTP, new
InetSocketAddress("localhost", 3128))
.setCredentials("example", "weakPassword"))
.build();
// JDK 11 HttpClient
HttpClient client = new JdkAsyncHttpClientBuilder()
.proxy(new ProxyOptions(ProxyOptions.Type.HTTP, new
InetSocketAddress("localhost", 3128))
.setCredentials("example", "weakPassword"))
.build();
You can now pass the constructed HttpClient instance into a service client builder for
use as the client for communicating with the service. The following example uses the
new HttpClient instance to build an Azure Storage Blob client.
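A minimal sketch, assuming the azure-storage-blob and azure-identity libraries and the httpClient instance built in the previous example; the endpoint, container, and blob names are placeholders:
Java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;

// Plug the custom HttpClient into the Storage Blob client builder.
BlobClient blobClient = new BlobClientBuilder()
    .endpoint("<your-storage-account-blob-endpoint>")
    .containerName("<container-name>")
    .blobName("<blob-name>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .httpClient(httpClient)
    .buildClient();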
For management libraries, you can set the HttpClient during Manager configuration.
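A minimal sketch, assuming the azure-resourcemanager and azure-identity libraries and the httpClient instance from the earlier example:
Java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;

// Configure the manager to use the custom HttpClient before authenticating.
AzureResourceManager azureResourceManager = AzureResourceManager.configure()
    .withHttpClient(httpClient)
    .authenticate(new DefaultAzureCredentialBuilder().build(),
        new AzureProfile(AzureEnvironment.AZURE))
    .withDefaultSubscription();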
HTTP pipeline
The HTTP pipeline is one of the key components in achieving consistency and
diagnosability in the Java client libraries for Azure. An HTTP pipeline is composed of:
An HTTP transport
HTTP pipeline policies
You can provide your own custom HTTP pipeline when creating a client. If you don't
provide a pipeline, the client library creates one configured to work with that specific
client library.
HTTP transport
The HTTP transport is responsible for establishing the connection to the server, and
sending and receiving HTTP messages. The HTTP transport forms the gateway for the
Azure SDK client libraries to interact with Azure services. As noted earlier in this article,
the Azure SDK for Java uses Netty by default for its HTTP transport. However, the SDK
also provides a pluggable HTTP transport so you can use other implementations where
appropriate. The SDK also provides two more HTTP transport implementations for
OkHttp and the HTTP client that ships with JDK 11 and later.
The Azure Core framework provides the policy with the necessary request and response
data along with any necessary context to execute the policy. The policy can then
perform its operation with the given data and pass the control along to the next policy
in the pipeline.
HTTP pipeline policy position
When you make HTTP requests to cloud services, it's important to handle transient
failures and to retry failed attempts. Because this functionality is a common requirement,
Azure Core provides a retry policy that can watch for transient failures and automatically
retry the request.
This retry policy, therefore, splits the whole pipeline into two parts: policies that execute
before the retry policy and policies that execute after the retry policy. Policies added
before the retry policy execute only once per API operation, and policies added after the
retry policy execute on every retry attempt.
So, when building the HTTP pipeline, you should understand whether to execute a
policy for each request retry or once per API operation.
To create a custom HTTP pipeline policy, you extend a base policy type and implement an
abstract method. You can then plug the policy into the pipeline.
Java
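// Hedged sketch of a custom policy, assuming the azure-core HttpPipelinePolicy interface.
// The header names are placeholders, not values from the original article.
public class CustomHeadersPolicy implements HttpPipelinePolicy {
    @Override
    public Mono<HttpResponse> process(HttpPipelineCallContext context, HttpPipelineNextPolicy next) {
        context.getHttpRequest().setHeader("x-custom-header-1", "value1");
        context.getHttpRequest().setHeader("x-custom-header-2", "value2");
        context.getHttpRequest().setHeader("x-custom-header-3", "value3");
        return next.process();
    }

    @Override
    public HttpPipelinePosition getPipelinePosition() {
        // PER_CALL runs the policy once per API operation; PER_RETRY would run it on every retry attempt.
        return HttpPipelinePosition.PER_CALL;
    }
}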
// The three headers are now added to the outgoing HTTP request.
The following example shows how to add the netty-tcnative-boringssl-static dependency, together
with the os-maven-plugin build extension that resolves the platform-specific classifier:
XML
<project>
    ...
    <dependencies>
        ...
        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-tcnative-boringssl-static</artifactId>
            <version>2.0.25.Final</version>
            <classifier>${os.detected.classifier}</classifier>
        </dependency>
        ...
    </dependencies>
    ...
    <build>
        ...
        <extensions>
            <extension>
                <groupId>kr.motd.maven</groupId>
                <artifactId>os-maven-plugin</artifactId>
                <version>1.4.0.Final</version>
            </extension>
        </extensions>
        ...
    </build>
    ...
</project>
Alternatively, the following example shows how to exclude the netty-tcnative-boringssl-static
dependency from azure-core-http-netty:
XML
<project>
    ...
    <dependencies>
        ...
        <dependency>
            <groupId>com.azure</groupId>
            <artifactId>azure-core-http-netty</artifactId>
            <version>1.13.6</version>
            <exclusions>
                <exclusion>
                    <groupId>io.netty</groupId>
                    <artifactId>netty-tcnative-boringssl-static</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        ...
    </dependencies>
    ...
</project>
Next steps
Now that you're familiar with HTTP client functionality in the Azure SDK for Java, learn
how to further customize the HTTP client you're using. For more information, see
Configure proxies in the Azure SDK for Java.
This article describes the asynchronous programming model in the Azure SDK for Java.
The Azure SDK initially contained only non-blocking, asynchronous APIs for interacting
with Azure services. These APIs let you use the Azure SDK to build scalable applications
that use system resources efficiently. However, the Azure SDK for Java also contains
synchronous clients to cater to a wider audience, and to make our client libraries
approachable for users not familiar with asynchronous programming. (See
Approachable in the Azure SDK design guidelines.) As such, all Java client libraries in
the Azure SDK for Java offer both asynchronous and synchronous clients. However, we
recommend using the asynchronous clients for production systems to maximize the use
of system resources.
Reactive streams
If you look at the Async Service Clients section in the Java Azure SDK Design
Guidelines , you'll notice that, instead of using CompletableFuture provided by Java 8,
our async APIs use reactive types. Why did we choose reactive types over types that are
natively available in JDK?
Java 8 introduced features like lambdas, streams, and CompletableFuture that help with
asynchronous operations. Lambdas make these push-based APIs more readable. Streams provide
functional-style operations to handle a collection of data elements. However, streams
are synchronous and can't be reused. CompletableFuture allows you to make a single
request, provides support for a callback, and expects a single response. However, many
cloud services require the ability to stream data - Event Hubs for instance.
Reactive streams can help to overcome these limitations by streaming elements from a
source to a subscriber. When a subscriber requests data from a source, the source sends
any number of results back. It doesn't need to send them all at once. The transfer
happens over a period of time, whenever the source has data to send.
In this model, the subscriber registers event handlers to process data when it arrives.
These push-based interactions notify the subscriber through distinct signals: a data signal for
each new element, an error signal when a failure occurs, and a completion signal when no more
data is expected.
Unlike Java Streams, reactive streams treat errors as first-class events. Reactive streams
have a dedicated channel for the source to communicate any errors to the subscriber.
Also, reactive streams allow the subscriber to negotiate the data transfer rate to
transform these streams into a push-pull model.
The Reactive Streams specification provides a standard for how the transfer of data
should occur. At a high level, the specification defines the following four interfaces
(Publisher, Subscriber, Subscription, and Processor) and specifies rules on how these
interfaces should be implemented.
There are some well-known Java libraries that provide implementations of this
specification, such as RxJava , Akka Streams , Vert.x , and Project Reactor .
The Azure SDK for Java adopted Project Reactor to offer its async APIs. The main factor
driving this decision was to provide smooth integration with Spring Webflux , which
also uses Project Reactor. Another contributing factor to choose Project Reactor over
RxJava was that Project Reactor uses Java 8 but RxJava, at the time, was still at Java 7.
Project Reactor also offers a rich set of operators that are composable and allow you to
write declarative code for building data processing pipelines. Another nice thing about
Project Reactor is that it has adapters for converting Project Reactor types to other
popular implementation types.
For the sake of completeness, it's worth mentioning that Java 9 introduced the Flow
class that includes the four reactive streams interfaces. However, this class doesn't
include any implementation.
The following example shows how to subscribe to the Mono and print the configuration
value to the console.
Java
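// Hedged sketch: asyncClient is assumed to be an App Configuration ConfigurationAsyncClient,
// and the key and label values are placeholders.
asyncClient.getConfigurationSetting("<key>", "<label>")
    .subscribe(
        configurationSetting -> System.out.println("Configuration value: " + configurationSetting.getValue()),
        ex -> System.out.println("Error retrieving configuration: " + ex.getMessage()),
        () -> System.out.println("Successfully retrieved configuration setting"));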
System.out.println("Done");
Notice that after calling getConfigurationSetting() on the client, the example code
subscribes to the result and provides three separate lambdas. The first lambda
consumes data received from the service, which is triggered upon successful response.
The second lambda is triggered if there is an error while retrieving the configuration.
The third lambda is invoked when the data stream is complete, meaning no more data
elements are expected from this stream.
Note
The subscribe() call doesn't block, so the "Done" message might be printed before the network
call to retrieve the configuration value is complete.
As shown in the following example, APIs that return a Flux also follow a similar pattern.
The difference is that the first callback provided to the subscribe() method is called
multiple times for each data element in the response. The error or the completion
callbacks are called exactly once and are considered as terminal signals. No other
callbacks are invoked if either of these signals are received from the publisher.
Java
asyncClient.receive().subscribe(
    event -> System.out.println("Sequence number of received event: " + event.getData().getSequenceNumber()),
    ex -> System.out.println("Error receiving events: " + ex.getMessage()),
    () -> System.out.println("Successfully completed receiving all events"));
Backpressure
What happens when the source is producing the data at a faster rate than the subscriber
can handle? The subscriber can get overwhelmed with data, which can lead to out-of-
memory errors. The subscriber needs a way to communicate back to the publisher to
slow down when it can't keep up. By default, when you call subscribe() on a Flux as
shown in the example above, the subscriber is requesting an unbounded stream of data,
indicating to the publisher to send the data as quickly as possible. This behavior isn't
always desirable, and the subscriber may have to control the rate of publishing through
"backpressure". Backpressure allows the subscriber to take control of the flow of data
elements. A subscriber will request a limited number of data elements that they can
handle. After the subscriber has completed processing these elements, the subscriber
can request more. By using backpressure, you can transform a push-model for data
transfer into a push-pull model.
The following example shows how you can control the rate at which events are received
by the Event Hubs consumer:
Java
asyncClient.receive().subscribe(new Subscriber<PartitionEvent>() {
    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription subscription) {
        this.subscription = subscription;
        this.subscription.request(1); // request 1 data element to begin with
    }

    @Override
    public void onNext(PartitionEvent partitionEvent) {
        System.out.println("Sequence number of received event: " + partitionEvent.getData().getSequenceNumber());
        this.subscription.request(1); // request another event when the subscriber is ready
    }

    @Override
    public void onError(Throwable throwable) {
        System.out.println("Error receiving events: " + throwable.getMessage());
    }

    @Override
    public void onComplete() {
        System.out.println("Successfully completed receiving all events");
    }
});
When the subscriber first "connects" to the publisher, the publisher hands the subscriber
a Subscription instance, which manages the state of the data transfer. This
Subscription is the medium through which the subscriber can apply backpressure by
calling request() to specify how many more data elements it can handle.
If the subscriber requests more than one data element each time it calls onNext() ,
request(10) for example, the publisher will send the next 10 elements immediately if
they're available or when they become available. These elements accumulate in a buffer
on the subscriber's end, and since each onNext() call will request 10 more, the backlog
keeps growing until either the publisher has no more data elements to send, or the
subscriber's buffer overflows, resulting in out-of-memory errors.
Cancel a subscription
A subscription manages the state of data transfer between a publisher and a subscriber.
The subscription is active until the publisher has completed transferring all the data to
the subscriber or the subscriber is no longer interested in receiving data. There are a
couple of ways you can cancel a subscription as shown below.
The following example cancels the subscription by disposing the subscriber:
Java
Disposable disposable = asyncClient.receive().subscribe(partitionEvent ->
    System.out.println("Sequence number of received event: " + partitionEvent.getData().getSequenceNumber()));

// much later on in your code, when you are ready to cancel the subscription,
// you can call the dispose method, as such:
disposable.dispose();
The follow example cancels the subscription by calling the cancel() method on
Subscription :
Java
asyncClient.receive().subscribe(new Subscriber<PartitionEvent>() {
    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription subscription) {
        this.subscription = subscription;
        this.subscription.request(1); // request 1 data element to begin with
    }

    @Override
    public void onNext(PartitionEvent partitionEvent) {
        System.out.println("Sequence number of received event: " + partitionEvent.getData().getSequenceNumber());
        this.subscription.cancel(); // Cancels the subscription. No further event is received.
    }

    @Override
    public void onError(Throwable throwable) {
        System.out.println("Error receiving events: " + throwable.getMessage());
    }

    @Override
    public void onComplete() {
        System.out.println("Successfully completed receiving all events");
    }
});
Conclusion
Threads are expensive resources that you shouldn't waste on waiting for responses from
remote service calls. As the adoption of microservices architectures increases, the need to
scale and use resources efficiently becomes vital. Asynchronous APIs are preferable for
network-bound operations. The Azure SDK for Java offers a rich set of APIs for
async operations to help maximize your system resources. We highly encourage you to
try out our async clients.
For more information on the operators that best suit your particular tasks, see Which
operator do I need? in the Reactor 3 Reference Guide .
Next steps
Now that you better understand the various asynchronous programming concepts, it's
important to learn how to iterate over the results. For more information on the best
iteration strategies, and details of how pagination works, see Pagination and iteration in
the Azure SDK for Java.
This article provides an overview of how to use the Azure SDK for Java pagination and
iteration functionality to work efficiently and productively with large data sets.
Many operations provided by the client libraries within the Azure Java SDK return more
than one result. The Azure Java SDK defines a set of acceptable return types in these
cases to ensure that developer experience is maximized through consistency. The return
types used are PagedIterable for sync APIs and PagedFlux for async APIs. The APIs
differ slightly on account of their different use cases, but conceptually they have the
same requirements:
Make it possible to easily iterate over each element in the collection individually,
ignoring any need for manual pagination or tracking of continuation tokens. Both
PagedIterable and PagedFlux make this task easy by transparently iterating over the paginated responses for you.
This article is split between the Java Azure SDK synchronous and asynchronous APIs.
You'll see the synchronous iteration APIs when you work with synchronous clients, and
asynchronous iteration APIs when you work with asynchronous clients.
Java
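// Sketch: iterate over each secret individually with a for-each loop; additional pages are
// retrieved as needed. The client and listSecrets() names follow this article's other examples.
for (Secret secret : client.listSecrets()) {
    System.out.println("Secret is: " + secret);
}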
Use Stream
Because PagedIterable has a stream() method defined on it, you can call it to use the
standard Java Stream APIs, as shown in the following example:
Java
client.listSecrets()
.stream()
.forEach(secret -> System.out.println("Secret is: " + secret));
Use Iterator
Because PagedIterable implements Iterable , it also has an iterator() method to
allow for the Java iterator programming style, as shown in the following example:
Java
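// Sketch: the iterator() method supports the classic Java iterator style.
// The client and listSecrets() names follow this article's other examples.
Iterator<Secret> secretIterator = client.listSecrets().iterator();
while (secretIterator.hasNext()) {
    System.out.println("Secret is: " + secretIterator.next());
}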
To work with individual pages rather than individual elements, use the iterableByPage() method,
as shown in the following example:
Java
Iterable<PagedResponse<Secret>> secretPages = client.listSecrets().iterableByPage();
for (PagedResponse<Secret> page : secretPages) {
    System.out.println("Response code: " + page.getStatusCode());
    System.out.println("Continuation Token: " + page.getContinuationToken());
    page.getElements().forEach(secret -> System.out.println("Secret value: " + secret));
}
There's also an iterableByPage overload that accepts a continuation token. You can call
this overload when you want to return to the same point of iteration at a later time.
Use Stream
The following example shows how the streamByPage() method performs the same
operation as shown above. This API also has a continuation token overload for returning
to the same point of iteration at a later time.
Java
client.listSecrets()
    .streamByPage()
    .forEach(page -> {
        System.out.println("Response code: " + page.getStatusCode());
        System.out.println("Continuation Token: " + page.getContinuationToken());
        page.getElements().forEach(secret -> System.out.println("Secret value: " + secret));
    });
Asynchronously observe pages and individual
elements
This section covers the asynchronous APIs. In async APIs, the network calls happen in a
different thread than the main thread that calls subscribe() . What this means is that the
main thread may terminate before the result is available. It's up to you to ensure that
the application doesn't exit before the async operation has had time to complete.
Observe individual elements
The following example shows how to observe each element asynchronously by providing a data
consumer, an error consumer, and a completion consumer:
Java
asyncClient.listSecrets()
.subscribe(secret -> System.out.println("Secret value: " + secret),
ex -> System.out.println("Error listing secrets: " +
ex.getMessage()),
() -> System.out.println("Successfully listed all secrets"));
Observe pages
The following example shows how the PagedFlux API lets you observe each page
asynchronously, again by using a byPage() API and by providing a consumer, error
consumer, and a completion consumer.
Java
asyncClient.listSecrets().byPage()
    .subscribe(page -> {
            System.out.println("Response code: " + page.getStatusCode());
            System.out.println("Continuation Token: " + page.getContinuationToken());
            page.getElements().forEach(secret -> System.out.println("Secret value: " + secret));
        },
        ex -> System.out.println("Error listing pages with secret: " + ex.getMessage()),
        () -> System.out.println("Successfully listed all pages with secret"));
Next steps
Now that you're familiar with pagination and iteration in the Azure SDK for Java,
consider reviewing Long-running operations in the Azure SDK for Java. Long-running
operations are operations that run for a longer duration than most normal HTTP
requests, typically because they require some effort on the server side.
This article provides an overview of using long-running operations with the Azure SDK
for Java.
Certain operations on Azure may take extended amounts of time to complete. These
operations are outside the standard HTTP style of quick request / response flow. For
example, copying data from a source URL to a Storage blob, or training a model to
recognize forms, are operations that may take a few seconds to several minutes. Such
operations are referred to as Long-Running Operations, and are often abbreviated as
'LRO'. An LRO may take seconds, minutes, hours, days, or longer to complete,
depending on the operation requested and the process that must be performed on the
server side.
In the Java client libraries for Azure, a convention exists that all long-running operations
begin with the begin prefix. This prefix indicates that this operation is long-running, and
that the means of interaction with this operation is slightly different from the usual
request / response flow. Along with the begin prefix, the return type from the operation
is also different than usual, to enable the full range of long-running operation
functionality. As with most things in the Azure SDK for Java, there are both synchronous
and asynchronous APIs for long-running operations: SyncPoller for synchronous clients, and
PollerFlux for asynchronous clients.
Both SyncPoller and PollerFlux are client-side abstractions intended to simplify
the interaction with long-running server-side operations. The rest of this article outlines
best practices when working with these types.
The following example shows how to poll for the status of a long-running operation by using a
SyncPoller:
Java
SyncPoller<UploadBlobProgress, UploadedBlobProperties> poller = syncClient.beginUploadFromUri(<URI to upload from>);

PollResponse<UploadBlobProgress> response;
do {
    response = poller.poll();
    System.out.println("Status of long running upload operation: " + response.getStatus());
    Duration pollInterval = response.getRetryAfter();
    TimeUnit.MILLISECONDS.sleep(pollInterval.toMillis());
} while (!response.getStatus().isComplete());
This example uses the poll() method on the SyncPoller to retrieve information on
progress of the long-running operation. This code prints the status to the console, but a
better implementation would make relevant decisions based on this status.
The getRetryAfter() method returns information about how long to wait before the
next poll. Most Azure long-running operations return the poll delay as part of their
HTTP response (that is, the commonly used retry-after header). If the response
doesn't contain the poll delay, then the getRetryAfter() method returns the duration
given at the time of invoking the long-running operation.
The example above uses a do..while loop to repeatedly poll until the long-running
operation is complete. If you aren't interested in these intermediate results, you can
instead call waitForCompletion() . This call will block the current thread until the long-
running operation completes and returns the last poll response:
Java
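// Sketch: block until the long-running operation reaches a terminal state and capture the last
// poll response (mirrors the types used in the preceding example).
PollResponse<UploadBlobProgress> response = poller.waitForCompletion();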
If the last poll response indicates that the long-running operation has completed
successfully, you can retrieve the final result using getFinalResult() :
Java
if (LongRunningOperationStatus.SUCCESSFULLY_COMPLETED ==
response.getStatus()) {
UploadedBlobProperties result = poller.getFinalResult();
}
The async API returns a PollerFlux immediately, but the long-running operation itself
won't start until you subscribe to the PollerFlux . This process is how all Flux -based
APIs operate. The following example shows an async long-running operation:
Java
asyncClient.beginUploadFromUri(...)
    .subscribe(response -> System.out.println("Status of long running upload operation: " + response.getStatus()));
In this example, you get intermittent status updates on the long-running operation. You can use
these updates to determine whether the long-running operation
is still operating in the expected fashion. This example prints the status to the console,
but a better implementation would make relevant error handling decisions based on this
status.
If you aren't interested in the intermediate status updates and just want to get notified
of the final result when it arrives, you can use code similar to the following example:
Java
asyncClient.beginUploadFromUri(...)
    .last()
    .flatMap(response -> {
        if (LongRunningOperationStatus.SUCCESSFULLY_COMPLETED == response.getStatus()) {
            return response.getFinalResult();
        }
        return Mono.error(new IllegalStateException("Polling completed unsuccessfully with status: " + response.getStatus()));
    })
    .subscribe(
        finalResult -> processFormPages(finalResult),
        ex -> countDownLatch.countDown(),
        () -> countDownLatch.countDown());
In this code, you retrieve the final result of the long-running operation by calling
last() . This call tells the PollerFlux that you want to wait for all the polling to
complete, at which point the long-running operation has reached a terminal state, and
you can inspect its status to determine the outcome. If the poller indicates that the
long-running operation has completed successfully, you can retrieve the final result and
pass it on to the consumer in the subscribe call.
Next steps
Now that you're familiar with the long-running APIs in the Azure SDK for Java, see
Configure proxies in the Azure SDK for Java to learn how to customize the HTTP client
further.
This article provides an overview of how to configure the Azure SDK for Java to make
proper use of proxies.
Each method of supplying a proxy has its own pros and cons and provides different
levels of encapsulation. When you've configured a proxy for an HttpClient , it will use
the proxy for the rest of its lifetime. Having the proxy tied to an individual HttpClient
allows an application to use multiple HttpClient instances where each can use a
different proxy to fulfill an application's proxying requirements.
When the builder inspects the environment, it will search for the following environment
configurations in the order specified:
1. HTTPS_PROXY
2. HTTP_PROXY
3. https.proxy*
4. http.proxy*
The * represents the well-known Java proxy properties. For more information, see Java
Networking and Proxies in the Oracle documentation.
Important
To use any proxy configuration, Java requires you to set the java.net.useSystemProxies system
property to true.
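For example, you can set this property early in your application's startup code, as in the
following minimal illustration:
Java
// Enable use of system proxy settings; set this before any HTTP connections are created.
System.setProperty("java.net.useSystemProxies", "true");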
You can also create an HTTP client instance that doesn't use any proxy configuration
present in the system environment variables. To override the default behavior, you
explicitly set a differently configured Configuration in the HTTP client builder. When
you set a Configuration in the builder, it no longer calls
Configuration.getGlobalConfiguration(). For example, if you call configuration(Configuration.NONE),
the builder ignores any proxy configuration found in the environment.
The following example uses the HTTP_PROXY environment variable with value
localhost:8888 to use Fiddler as the proxy. This code demonstrates creating a Netty
and an OkHttp HTTP client. (For more information on HTTP client configuration, see
HTTP clients and pipelines.)
Bash
export HTTP_PROXY=localhost:8888
Java
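// Hedged sketch: with HTTP_PROXY set in the environment, the builders pick up the proxy from
// the global configuration by default, so no explicit ProxyOptions are needed here.
HttpClient nettyHttpClient = new NettyAsyncHttpClientBuilder().build();
HttpClient okHttpHttpClient = new OkHttpAsyncHttpClientBuilder().build();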
To prevent the environment proxy from being used, configure the HTTP client builder
with Configuration.NONE , as shown in the following example:
Java
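// Hedged sketch: pass Configuration.NONE so that no proxy settings are loaded from the environment.
HttpClient httpClient = new NettyAsyncHttpClientBuilder()
    .configuration(Configuration.NONE)
    .build();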
Java
// Fiddler uses username "1" and password "1" with basic authentication as its proxy authentication requirement.
ProxyOptions proxyOptions = new ProxyOptions(ProxyOptions.Type.HTTP, new InetSocketAddress("localhost", 8888))
    .setCredentials("1", "1");
You can configure HTTP client builders with ProxyOptions directly to indicate an explicit
proxy to use. This configuration is the most granular way to provide a proxy, and
generally isn't as flexible as passing a Configuration that you can mutate to update
proxying requirements.
Java
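// Sketch: pass the ProxyOptions instance created above directly to the HTTP client builder.
HttpClient httpClient = new NettyAsyncHttpClientBuilder()
    .proxy(proxyOptions)
    .build();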
Next steps
Now that you're familiar with proxy configuration in the Azure SDK for Java, see
Configure tracing in the Azure SDK for Java to better understand flows within your
application, and to help diagnose issues.
This article provides an overview of how to configure the Azure SDK for Java to integrate
tracing functionality.
You can enable tracing in Azure client libraries by using and configuring the
OpenTelemetry SDK or using an OpenTelemetry-compatible agent. OpenTelemetry is a
popular open-source observability framework for generating, capturing, and collecting
telemetry data for cloud-native software.
There are two key concepts related to tracing: span and trace. A span represents a
single operation in a trace. A span can represent an HTTP request, a remote procedure
call (RPC), a database query, or even the path that your code takes. A trace is a tree of
spans showing the path of work through a system. You can distinguish a trace on its
own by a unique 16-byte sequence called a TraceID. For more information on these
concepts and how they relate to OpenTelemetry, see the OpenTelemetry
documentation .
For more details on how to configure exporters, add manual instrumentation, or enrich
telemetry, see OpenTelemetry Instrumentation for Java .
Note
The OpenTelemetry agent artifact is stable, but it doesn't provide over-the-wire telemetry
stability guarantees. Span names and attribute names produced by the Azure SDK might change
over time if you update the agent. For more information, see Compatibility requirements.
To enable tracing in the Azure client libraries, add the following dependency:
XML
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-core-tracing-opentelemetry</artifactId>
</dependency>
If you run the application now, you should get Azure SDK spans on your backend.
However with asynchronous calls, the correlation between Azure SDK and application
spans may be broken.
In the following example, when an incoming web request is traced manually, the
Application Configuration Client Library is called asynchronously in the scope of this
request.
Java
// You could also pass the context using the reactor `contextWrite` method under the same `trace-context` key.
appConfigAsyncClient.setConfigurationSettingWithResponse(settings)
    .contextWrite(reactor.util.context.Context.of("trace-context", traceContext))
    //...
To find out which spans and attributes the SDK emits, see the Azure SDK semantic
conventions specification . Azure SDK (and OpenTelemetry) semantic conventions are
not stable and may change in the future.
Next steps
Now that you're familiar with the core cross-cutting functionality in the Azure SDK for
Java, see Azure authentication with Java and Azure Identity to learn how you can create
secure applications.
Spring Cloud Azure is an open source project that helps make it easier to use Azure
services in Spring applications.
Spring Cloud Azure is an open source project, with all resources available to the public.
The following list provides links to these resources:
XML
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-security-keyvault-secrets</artifactId>
<version>4.5.2</version>
</dependency>
Java
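// Hedged sketch (not the original article's code): build a SecretClient with the Azure SDK
// directly. The placeholder values would normally come from configuration such as the
// KeyVaultProperties class shown below.
SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl("<vault-url>")
    .credential(new ClientSecretCredentialBuilder()
        .tenantId("<tenant-id>")
        .clientId("<client-id>")
        .clientSecret("<client-secret>")
        .build())
    .buildClient();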
Java
@ConfigurationProperties("azure.keyvault")
public class KeyVaultProperties {
    private String vaultUrl;
    private String tenantId;
    private String clientId;
    private String clientSecret;

    // Getters and setters omitted for brevity.
}
Java
@SpringBootApplication
@EnableConfigurationProperties(KeyVaultProperties.class)
public class SecretClientApplication implements CommandLineRunner {
    private KeyVaultProperties properties;

    // Constructor and run() implementation omitted for brevity.
}
5. Add the necessary properties to your application.yml file, as shown in the following
example:
YAML
azure:
keyvault:
vault-url:
tenant-id:
client-id:
client-secret:
1. Add the Spring Cloud Azure Key Vault starter dependency, as shown in the following example:
XML
<dependencies>
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-starter-keyvault-
secrets</artifactId>
</dependency>
</dependencies>
2. Use a bill of materials (BOM) to manage the Spring Cloud Azure version, as shown
in the following example:
XML
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-dependencies</artifactId>
<version>5.15.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
3. Configure the Key Vault endpoint in your application.yml file, as shown in the following
example:
YAML
spring:
cloud:
azure:
keyvault:
secret:
endpoint:
4. Sign in with Azure CLI by using the following command. Your credentials will then
be provided by Azure CLI, so there will be no need to add other credential
information such as client-id and client-secret .
Azure CLI
az login
5. Auto-wire SecretClient in the relevant places, as shown in the following example:
Java
@SpringBootApplication
public class SecretClientApplication implements CommandLineRunner {
    @Autowired
    private SecretClient secretClient;

    @Override
    public void run(String... args) {
        System.out.println("sampleProperty: " + secretClient.getSecret("sampleProperty").getValue());
    }
}
Spring Cloud Azure will provide some other features besides the auto-configured
SecretClient . For example, you can use @Value to get the secret value, as shown in the
following example:
Java
@SpringBootApplication
public class PropertySourceApplication implements CommandLineRunner {
    @Value("${sampleProperty1}")
    private String sampleProperty1;

    // Implementation of run() omitted for brevity.
}
Microsoft Entra ID
Provides integration support for Spring Security with Microsoft Entra ID for
authentication. For more information, see Spring Cloud Azure support for Spring
Security.
Azure Storage
Provides Spring Boot support for Azure Storage services. For more information, see
Spring Cloud Azure resource handling.
Get support
If you need support for Spring Cloud Azure, you can ask for help in the following ways:
Create Azure support tickets. Customers with an Azure support plan can open an
Azure support ticket . We recommend this option if your problem requires
immediate attention.
File GitHub issues in the Azure/azure-sdk-for-java repository . We use GitHub
issues to track bugs, questions, and feature requests. GitHub issues are free, but
the response time isn't guaranteed. For more information, see GitHub issues
support process .
Next steps
Tutorial: Read a secret from Azure Key Vault in a Spring Boot application
Secure REST API using Spring Security 5 and Microsoft Entra ID
How to use the Spring Boot Starter with Azure Cosmos DB for NoSQL
This article introduces many troubleshooting tools available to you when you use the
Azure SDK for Java, and links to other articles with further details.
The Azure SDK for Java consists of many client libraries - one or more for each Azure
Service that exists. We ensure that all client libraries are built to a consistent, high
standard, with common patterns for configuration, logging, exception handling, and
troubleshooting. For more information, see Use the Azure SDK for Java.
Because troubleshooting can span such a wide subject area, we've developed the
following troubleshooting guides that you may want to review:
Beyond these documents, the following content provides guidance on making the best
use of logging and exception handling as it relates to the Azure SDK for Java.
Java
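// Hedged sketch: enable HTTP request/response logging on a single client builder.
// The clientBuilder variable and the chosen log level are illustrative.
clientBuilder.httpLogOptions(new HttpLogOptions()
    .setLogLevel(HttpLogDetailLevel.BODY_AND_HEADERS));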
This code changes the HTTP request/response logging for a single client instance.
Alternatively, you can configure logging HTTP requests and responses for your entire
application by setting the AZURE_HTTP_LOG_DETAIL_LEVEL environment variable to one of
the values in the following table. It's important to note that this change enables logging
for every Azure client that supports logging HTTP request/response.
basic: Logs only URLs, HTTP methods, and time to finish the request.
headers: Logs everything in BASIC, plus all the request and response headers.
body: Logs everything in BASIC, plus the request and response bodies.
Note
When you log request and response bodies, ensure that they don't contain
confidential information. When you log query parameters and headers, the client
library has a default set of query parameters and headers that are considered safe
to log. It's possible to add additional query parameters and headers that are safe to
log, as shown in the following example:
Java
clientBuilder.httpLogOptions(new HttpLogOptions()
    .addAllowedHeaderName("safe-to-log-header-name")
    .addAllowedQueryParamName("safe-to-log-query-parameter-name"));
The following example shows you how to catch this exception with a synchronous client:
Java
try {
ConfigurationSetting setting = new
ConfigurationSetting().setKey("myKey").setValue("myValue");
client.getConfigurationSetting(setting);
} catch (HttpResponseException e) {
System.out.println(e.getMessage());
// Do something with the exception
}
With asynchronous clients, you can catch and handle exceptions in the error callbacks,
as shown in the following example:
Java
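// Hedged sketch: handle errors in the error callback when using an asynchronous client.
// asyncClient is assumed to be the async counterpart of the ConfigurationClient used above.
ConfigurationSetting setting = new ConfigurationSetting().setKey("myKey").setValue("myValue");
asyncClient.getConfigurationSetting(setting)
    .subscribe(
        value -> System.out.println("Configuration value: " + value.getValue()),
        ex -> System.out.println("Error retrieving configuration: " + ex.getMessage()));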
For more information on how to enable tracing in the Azure SDK for Java, see Configure
tracing in the Azure SDK for Java.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommended that you file an issue in
the Azure SDK for Java GitHub repository .
This article covers failure investigation techniques, common errors for the credential
types in the Azure Identity Java client library, and mitigation steps to resolve these
errors. Because there are many credential types available in the Azure SDK for Java,
we've split the troubleshooting guide into sections based on usage scenario. The
following sections are available:
The remainder of this article covers general troubleshooting techniques and guidance
that apply to all credential types.
ClientAuthenticationException
Any service client method that makes a request to the service can raise exceptions
arising from authentication errors. These exceptions are possible because the token is
requested from the credential on the first call to the service and on any subsequent
requests to the service that need to refresh the token.
To distinguish these failures from failures in the service client, Azure Identity classes raise
ClientAuthenticationException with details describing the source of the error in the
exception message and possibly the error message. Depending on the application, these
errors may or may not be recoverable. The following code shows an example of catching
ClientAuthenticationException :
Java
// Create a secret client using the DefaultAzureCredential
SecretClient client = new SecretClientBuilder()
.vaultUrl("https://myvault.vault.azure.net/")
.credential(new DefaultAzureCredentialBuilder().build())
.buildClient();
try {
KeyVaultSecret secret = client.getSecret("secret1");
} catch (ClientAuthenticationException e) {
//Handle Exception
e.printStackTrace();
}
CredentialUnavailableException
CredentialUnavailableException is a special exception type derived from
ClientAuthenticationException. It indicates that the credential can't attempt authentication in
the current environment, rather than that an authentication attempt was rejected by the service.
Permission issues
Calls to service clients resulting in HttpResponseException with a StatusCode of 401 or
403 often indicate the caller doesn't have sufficient permissions for the specified API.
Check the service documentation to determine which roles are needed for the specific
request. Ensure that the authenticated user or service principal has been granted the
appropriate roles on the resource.
Errors can occur while a credential is authenticating. These errors can include errors received
from requests to the Microsoft Entra security token service (STS) and often contain information
helpful to diagnosis. Consider the following ClientAuthenticationException message:
Output
Original exception:
AADSTS7000215: Invalid client secret provided. Ensure the secret being sent
in the request is the client secret value, not the client secret ID, for a
secret added to app 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'.
Trace ID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Correlation ID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Timestamp: 2022-01-01 00:00:00Z
Failing credential type: The type of credential that failed to authenticate - in this
case, ClientSecretCredential . This information is helpful when diagnosing issues
with chained credential types, such as DefaultAzureCredential or
ChainedTokenCredential .
STS error code and message: The error code and message returned from the
Microsoft Entra STS - in this case, AADSTS7000215: Invalid client secret
provided. This information can give insight into the specific reason the request
failed. For instance, in this specific case, because the provided client secret is
incorrect. For more information on STS error codes, see the AADSTS error codes
section of Microsoft Entra authentication and authorization error codes.
The underlying MSAL library, MSAL4J , also has detailed logging. This logging is highly
verbose and includes all personal data including tokens. This logging is most useful
when working with product support. As of v1.10.0, credentials that offer this logging
have a method called enableUnsafeSupportLogging() .
Caution
Requests and responses in the Azure Identity library contain sensitive information.
You must take precautions to protect logs when customizing the output to avoid
compromising account security.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommended that you file an issue in
the Azure SDK for Java GitHub repository .
This article provides guidance on dealing with issues encountered when authenticating
Azure SDK for Java applications hosted on Azure, through various TokenCredential
implementations. For more information, see Authenticate Azure-hosted Java
applications.
Troubleshoot DefaultAzureCredential
When you use DefaultAzureCredential , you can optionally try/catch for
CredentialUnavailableException . The exception message aggregates the failure messages from
each credential in the chain and describes how to mitigate them. The sections that follow cover
the individual credentials that DefaultAzureCredential relies on.
Troubleshoot EnvironmentCredential
When you use EnvironmentCredential , you can optionally try/catch for
CredentialUnavailableException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error: Environment variables aren't fully configured.
Description: A valid combination of environment variables wasn't set.
Mitigation: Ensure that the appropriate environment variables are set prior to application
startup for the intended authentication method, as described in the following list:
- To authenticate a service principal using a client secret, ensure that the variables
  AZURE_CLIENT_ID , AZURE_TENANT_ID , and AZURE_CLIENT_SECRET are properly set.
- To authenticate a service principal using a certificate, ensure that the variables
  AZURE_CLIENT_ID , AZURE_TENANT_ID , AZURE_CLIENT_CERTIFICATE_PATH , and optionally
  AZURE_CLIENT_CERTIFICATE_PASSWORD are properly set.
- To authenticate a user using a password, ensure that the variables AZURE_USERNAME and
  AZURE_PASSWORD are properly set.
Troubleshoot ManagedIdentityCredential
ManagedIdentityCredential is designed to work on various Azure hosts that provide
managed identity. Configuring the managed identity and troubleshooting failures varies
from host to host. The following sections cover some of the Azure host environments to which
you can assign a managed identity and that ManagedIdentityCredential supports, such as Azure
Virtual Machines, Azure App Service, and Azure Kubernetes Service.
Error: The requested identity hasn't been assigned to this resource.
Description: The Azure Instance Metadata Service (IMDS) endpoint responded with a status code
of 400, indicating that the requested identity isn't assigned to the virtual machine (VM).
Mitigation: If you're using a user-assigned identity, ensure that the specified clientId is
correct. If you're using a system-assigned identity, make sure that you've enabled it properly.
For more information, see the Enable system-assigned managed identity on an existing VM section
of Configure managed identities for Azure resources on a VM using the Azure portal.

Error: The request failed due to a gateway error.
Description: The request to the IMDS endpoint failed due to a gateway error, a 502 or 504
status code.
Mitigation: IMDS doesn't support calls via proxy or gateway. Disable proxies or gateways
running on the VM for calls to the IMDS endpoint http://169.254.169.254/.

Error: Multiple attempts failed to obtain a token from the IMDS endpoint.
Description: Retries to retrieve a token from the IMDS endpoint have been exhausted.
Mitigation: For more information on specific failures, see the inner exception messages. If the
data has been truncated, more detail can be obtained by collecting logs.
You can verify that IMDS is available on the VM by using the following command:
Bash
curl 'http://169.254.169.254/metadata/identity/oauth2/token?resource=https://management.core.windows.net&api-version=2018-02-01' -H "Metadata: true"
Warning
The output of this command contains a valid access token. To avoid compromising
account security, don't share this access token.
On hosts such as Azure App Service, where the IDENTITY_ENDPOINT and IDENTITY_HEADER environment
variables are set, you can verify that the managed identity endpoint is available by using the
following command:
Bash
curl "$IDENTITY_ENDPOINT?resource=https://management.core.windows.net&api-version=2019-08-01" -H "X-IDENTITY-HEADER: $IDENTITY_HEADER"
Warning
The output of this command contains a valid access token. To avoid compromising
account security, don't share this access token.
Error: No Managed Identity endpoint found.
Description: The application attempted to authenticate before an identity was assigned to its
pod.
Mitigation: Verify that the pod is labeled correctly. This problem also occurs when a correctly
labeled pod authenticates before the identity is ready. To prevent initialization races,
configure NMI to set the Retry-After header in its responses. For more information, see Set
Retry-After.
Troubleshoot WorkloadIdentityCredential
When you use WorkloadIdentityCredential , you can optionally try/catch for
CredentialUnavailableException . The following mitigation steps apply to the errors that this
exception indicates.
If you're using WorkloadIdentityCredential , then:
- Ensure that the tenant ID is specified via the tenantId setter on the credential builder or
  the AZURE_TENANT_ID environment variable.
- Ensure that the client ID is specified via the clientId setter on the credential builder or
  the AZURE_CLIENT_ID environment variable.
- Ensure that the token file path is specified via the tokenFilePath setter on the credential
  builder or the AZURE_FEDERATED_TOKEN_FILE environment variable.
- For other issues, see the product troubleshooting guide.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommended that you file an issue in
the Azure SDK for Java GitHub repository .
This article provides guidance on dealing with issues encountered when authenticating
Azure SDK for Java applications running locally on developer machines, through various
TokenCredential implementations. For more information, see Azure authentication in Java
development environments.
Troubleshoot AzureCliCredential
When you use AzureCliCredential , you can optionally try/catch for
CredentialUnavailableException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error: Azure CLI not installed.
Description: The Azure CLI isn't installed or couldn't be found.
Mitigation:
- Ensure that you've properly installed the Azure CLI.
- Validate that the installation location has been added to the PATH environment variable.

Error: Please run 'az login' to set up account.
Description: No account is currently signed in to the Azure CLI, or the sign-in has expired.
Mitigation:
- Sign in to the Azure CLI using the az login command. For more information, see Sign in with
  Azure CLI.
- Validate that the Azure CLI can obtain tokens. For more information, see the next section.
You can use the following command to verify which account the Azure CLI is currently using:
Azure CLI
az account show
After you've verified that the Azure CLI is using the correct account, use the following
command to validate that it's able to obtain tokens for this account:
Azure CLI
az account get-access-token \
--output json \
--resource https://management.core.windows.net
Warning
The output of this command contains a valid access token. To avoid compromising
account security, don't share this access token.
Troubleshoot AzureDeveloperCliCredential
When you use AzureDeveloperCliCredential , you can optionally try/catch for
CredentialUnavailableException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error: Azure Developer CLI not installed.
Description: The Azure Developer CLI isn't installed or couldn't be found.
Mitigation:
- Ensure that you've properly installed the Azure Developer CLI.
- Validate that the installation location has been added to the PATH environment variable.

Error: Please run 'azd auth login' to set up account.
Description: No account is currently signed in to the Azure Developer CLI, or the sign-in has
expired.
Mitigation:
- Sign in to the Azure Developer CLI using the azd auth login command.
- Validate that the Azure Developer CLI can obtain tokens. For more information, see the next
  section.
You can use the following command to check the current Azure Developer CLI configuration:
Bash
azd config list
After you've verified that the Azure Developer CLI is using the correct account, you can use
the following command to validate that it's able to obtain tokens for this account:
Bash
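# Assumed command and flags (verify against your azd version): request an access token for the
# Azure management plane to confirm that azd can authenticate.
azd auth token --output json --scope https://management.core.windows.net/.default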
Warning
The output of this command contains a valid access token. To avoid compromising
account security, don't share this access token.
Troubleshoot AzurePowerShellCredential
When you use AzurePowerShellCredential , you can optionally try/catch for
CredentialUnavailableException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error: Az.Account module >= 2.2.0 isn't installed.
Description: The Az.Account module needed for authentication in Azure PowerShell isn't
installed.
Mitigation: Install the latest Az.Account module. For more information, see How to install
Azure PowerShell.

Error: Please run 'Connect-AzAccount' to set up account.
Description: No account is currently signed in to Azure PowerShell.
Mitigation:
- Sign in to Azure PowerShell using the Connect-AzAccount command. For more information, see
  Sign in with Azure PowerShell.
- Validate that Azure PowerShell can obtain tokens. For more information, see the next section.
Use the following command to verify which account Azure PowerShell is signed in with:
PowerShell
Get-AzContext
Output
Name                                     Account           SubscriptionName  Environment  TenantId
----                                     -------           ----------------  -----------  --------
Subscription1 (xxxxxxxx-xxxx-xxxx-xxx... test@outlook.com  Subscription1     AzureCloud   xxxxxxxx-x...
After you've verified that Azure PowerShell is using the correct account, you can use the
following command to validate that it's able to obtain tokens for this account.
PowerShell
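# Assumed command from the Az.Accounts module: request a token for Azure Resource Manager.
Get-AzAccessToken -ResourceUrl "https://management.core.windows.net"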
Warning
The output of this command contains a valid access token. To avoid compromising
account security, don't share this access token.
Troubleshoot VisualStudioCodeCredential
Error: Failed To Read VS Code Credentials, or Authenticate via Azure Tools plugin in VS Code.
Description: No Azure account information was found in the VS Code configuration.
Mitigation:
- Ensure that you've properly installed the Azure Account plugin.
- Use View > Command Palette to execute the Azure: Sign In command. This command opens a
  browser window and displays a page that allows you to sign in to Azure.
- If you already have the Azure Account extension installed and have signed in to your account,
  try logging out and logging in again. This action repopulates the cache and potentially
  mitigates the error you're getting.

Error: ADFS tenant not supported.
Description: Visual Studio Azure Service Authentication doesn't currently support ADFS tenants.
Mitigation: Use credentials from a supported cloud when authenticating with Visual Studio. For
more information about the supported clouds, see National clouds.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommended that you file an issue in
the Azure SDK for Java GitHub repository .
This article provides guidance on dealing with issues encountered when authenticating
Azure SDK for Java applications via service principal, through various TokenCredential
implementations. For more information, see Azure authentication with service principal.
Troubleshoot ClientSecretCredential
When you use ClientSecretCredential , you can optionally try/catch for
ClientAuthenticationException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error code: AADSTS7000215
Description: An invalid client secret was provided.
Mitigation: Ensure that the clientSecret provided when constructing the credential is valid. If
unsure, create a new client secret using the Azure portal. For more information, see the Create
a new application secret section of Create a Microsoft Entra application and service principal
that can access resources.

Error code: AADSTS7000222
Description: An expired client secret was provided.
Mitigation: Create a new client secret using the Azure portal. For more information, see the
Create a new application secret section of Create a Microsoft Entra application and service
principal that can access resources.

Error code: AADSTS700016
Description: The specified application wasn't found in the specified tenant.
Mitigation: Ensure the specified clientId and tenantId are correct for your application
registration. For multi-tenant apps, ensure that a tenant admin has added the application to
the desired tenant. For more information, see Create a Microsoft Entra application and service
principal that can access resources.
Troubleshoot ClientCertificateCredential
When you use ClientCertificateCredential , you can optionally try/catch for
ClientAuthenticationException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error code: AADSTS700027
Description: Client assertion contains an invalid signature.
Mitigation: Ensure that you've uploaded the specified certificate to the Microsoft Entra
application registration. For more information, see the Upload a trusted certificate issued by
a certificate authority section of Create a Microsoft Entra application and service principal
that can access resources.

Error code: AADSTS700016
Description: The specified application wasn't found in the specified tenant.
Mitigation: Ensure that the specified clientId and tenantId are correct for your application
registration. For multi-tenant apps, ensure that a tenant admin has added the application to
the desired tenant. For more information, see Create a Microsoft Entra application and service
principal that can access resources.
Troubleshoot ClientAssertionCredential
When you use ClientAssertionCredential , you can optionally try/catch for
ClientAuthenticationException . The following list describes the errors that this exception
indicates, along with mitigation steps.
Error code: AADSTS700021
Description: The client assertion application identifier doesn't match the client_id parameter.
Mitigation: Ensure that the JWT assertion created has the correct values specified for the sub
and issuer values of the payload. Both of these fields should be equal to clientId . For the
client assertion format, see Microsoft identity platform application authentication certificate
credentials.

Error code: AADSTS700023
Description: The client assertion audience claim doesn't match the Realm issuer.
Mitigation: Ensure that the audience aud field in the JWT assertion created has the correct
value for the audience specified in the payload. Set this field to
https://login.microsoftonline.com/{tenantId}/v2 .

Error code: AADSTS50027
Description: The JWT token is invalid or malformed.
Mitigation: Ensure that the JWT assertion token is in the valid format. For more information,
see Microsoft identity platform application authentication certificate credentials.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommended that you file an issue in
the Azure SDK for Java GitHub repository .
When you use credentials in a multi-tenant context, you can optionally try/catch for
ClientAuthenticationException . The exception message describes the specific error and how to
mitigate it.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommended that you file an issue in
the Azure SDK for Java GitHub repository .
This article covers failure investigation techniques, common errors for the credential
types in the Event Hubs library, and mitigation steps to resolve these errors. In addition
to the general troubleshooting techniques and guidance that apply regardless of the
Event Hubs use case, the following articles cover specific features of the Event Hubs
library:
The remainder of this article covers general troubleshooting techniques and guidance
that apply to all users of the Event Hubs library.
To resolve the specific error that an AMQP exception represents, follow the Event Hubs
messaging exceptions guidance. An AmqpException exposes the following information:
getErrorCondition: The underlying AMQP error. For a description of the errors, see
the AmqpErrorCondition Enum documentation or the OASIS AMQP 1.0 spec .
isTransient: A value that indicates whether trying to perform the same operation is
possible. SDK clients apply the retry policy when the error is transient.
getErrorContext: Contains the following information about where the AMQP error
originated:
LinkErrorContext: Errors that occur in either the send or receive link.
SessionErrorContext: Errors that occur in the session.
AmqpErrorContext: Errors that occur in the connection or a general AMQP error.
When the connection to Event Hubs is idle, the service disconnects the client after some
time. This issue isn't a problem because the clients re-establish a connection when a
service operation is requested. For more information, see AMQP errors in Azure Service
Bus.
Permission issues
An AmqpException with an AmqpErrorCondition of amqp:unauthorized-access means
that the provided credentials don't allow for performing the action (receiving or
sending) with Event Hubs. To resolve this issue, try the following tasks:
Double check that you have the correct connection string. For more information,
see Get an Event Hubs connection string.
Ensure that your shared access signature (SAS) token is generated correctly. For
more information, see Authorizing access to Event Hubs resources using Shared
Access Signatures.
For other possible solutions, see Troubleshoot authentication and authorization issues
with Event Hubs.
Connectivity issues
Verify that the connection string or fully qualified domain name specified when
creating the client is correct. For more information, see Get an Event Hubs
connection string.
Check the firewall and port permissions in your hosting environment and verify
that the AMQP ports 5671 and 5672 are open.
Make sure that the endpoint is allowed through the firewall.
Try using WebSockets, which connects on port 443. For more information, see the
PublishEventsWithWebSocketsAndProxy.java sample.
See if your network is blocking specific IP addresses. For more information, see
What IP addresses do I need to allow?
If applicable, check the proxy configuration. For more information, see the
PublishEventsWithWebSocketsAndProxy.java sample.
For more information about troubleshooting network connectivity, see
Troubleshoot connectivity issues - Azure Event Hubs.
To use the same AMQP connection when creating multiple clients, you can use the
EventHubClientBuilder.shareConnection() flag, hold a reference to that
EventHubClientBuilder , and create new clients from that same builder instance.
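For illustration, the following hedged sketch shows one way to share a connection across
clients created from the same builder instance (the types and methods follow the
azure-messaging-eventhubs library, and the connection string values are placeholders):
Java
EventHubClientBuilder builder = new EventHubClientBuilder()
    .connectionString("<connection-string>", "<event-hub-name>")
    .shareConnection();

// Clients built from the same builder instance reuse the shared AMQP connection.
EventHubProducerAsyncClient producer = builder.buildAsyncProducerClient();
EventHubConsumerAsyncClient consumer = builder
    .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
    .buildAsyncConsumerClient();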
Add "TransportType=AmqpWebSockets"
To use web sockets, see the PublishEventsWithWebSocketsAndProxy.java sample.
For more information about the Azure.Identity library, check out our Authentication
and the Azure SDK blog post.
In addition to enabling logging, setting the log level to VERBOSE or DEBUG provides
insights into the library's state. The following sections show sample log4j2 and logback
configurations to reduce the excessive messages when verbose logging is enabled.
Configure Log4J 2
Use the following steps to configure Log4J 2:
1. Add the dependencies in your pom.xml using the ones from the logging sample
pom.xml , in the "Dependencies required for Log4j2" section.
2. Add log4j2.xml to your src/main/resources folder.
Configure logback
Use the following steps to configure logback:
1. Add the dependencies in your pom.xml using the ones from the logging sample
pom.xml , in the "Dependencies required for logback" section.
2. Add logback.xml to your src/main/resources folder.
To trace the AMQP transport frames, set the PN_TRACE_FRM=1 environment variable.
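For example:
Bash
export PN_TRACE_FRM=1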
The following configuration file logs TRACE level output from Proton-J to the proton-
trace.log file:
properties
handlers=java.util.logging.FileHandler
.level=OFF
proton.trace.level=ALL
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.pattern=proton-trace.log
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=[%1$tF %1$tr] %3$s %4$s: %5$s %n
Reduce logging
One way to decrease logging is to change the verbosity. Another way is to add filters that
exclude logs from packages like com.azure.messaging.eventhubs or com.azure.core.amqp . For
examples, see the XML files in the Configure Log4J 2 and Configure logback sections.
When you submit a bug, the log messages from classes in the following packages are
interesting:
com.azure.core.amqp.implementation
com.azure.core.amqp.implementation.handler
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommend that you file an issue in
the Azure SDK for Java GitHub repository.
This article provides solutions to common problems that you might encounter when you
use the EventHubProducerClient and EventHubProducerAsyncClient types. If you're
looking for solutions to other common problems that you might encounter when you
use Event Hubs, see Troubleshoot Azure Event Hubs.
By design, Event Hubs doesn't promote the Kafka message key to the Event Hubs partition
key, or the reverse, because with the same key value the Kafka client and the Event Hubs
client would likely send the message to two different partitions. Using the same value in
a cross-protocol scenario could therefore cause confusion. Instead, expose the property
under a protocol-specific key so that clients using the other protocol can read it.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommend that you file an issue in
the Azure SDK for Java GitHub repository.
This article provides solutions to common problems that you might encounter when you use
the EventProcessorClient type. If you're looking for solutions to other common problems that
you might encounter when you use Azure Event Hubs, see Troubleshoot Azure Event Hubs.
Partition ownership is determined via the ownership records in the CheckpointStore . On every
load balancing interval, the EventProcessorClient will perform the following tasks:
4. Update the ownership record for the partitions it owns that have an active link to that
partition.
You can configure the load balancing and ownership expiration intervals when you create the
EventProcessorClient via the EventProcessorClientBuilder. For example, with a two-minute
ownership expiration interval, if an EventProcessorClient notices that an ownership record
hasn't been updated in the last 2 minutes - say, by 9:32am - it considers the partition unowned.
If an error occurs in one of the partition consumers, the EventProcessorClient closes the
corresponding consumer but doesn't try to reclaim the partition until the next load balancing cycle.
Output
New receiver 'nil' with higher epoch of '0' is created hence current receiver 'nil' with epoch '0' is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used.
TrackingId:<GUID>,
SystemTracker:<NAMESPACE>:eventhub:<EVENT_HUB_NAME>|<CONSUMER_GROUP>,
Timestamp:2022-01-01T12:00:00
This error is expected when load balancing occurs after EventProcessorClient instances are
added or removed. Load balancing is an ongoing process. When you use the
BlobCheckpointStore with your consumer, every ~30 seconds (by default), the consumer
checks to see which consumers have a claim for each partition, then runs some logic to
determine whether it needs to 'steal' a partition from another consumer. The service
mechanism used to assert exclusive ownership over a partition is known as the Epoch.
However, if no instances are being added or removed, there's an underlying issue that should
be addressed. For more information, see the Partition ownership changes frequently section
and Filing GitHub issues .
You shouldn't specify -Xmx as a value larger than the memory available or limit set for the host
(the VM or container) - for example, the memory requested in the container's configuration.
You should allocate enough memory for the host to support the Java heap.
The following steps describe a typical way to measure the value for max Java Heap:
1. Run the application in an environment close to production, where the application sends,
receives, and processes events under the peak load expected in production.
2. Wait for the application to reach a steady state. At this stage, the application and JVM
would have loaded all domain objects, class types, static instances, object pools (TCP, DB
connection pools), etc.
Under the steady state, heap usage shows a stable sawtooth-shaped pattern as garbage
collections occur.
3. After the application reaches the steady state, force a full garbage collection (GC) using
tools like JConsole. Observe the memory occupied after the full GC. You want to size the
heap such that only 30% is occupied after the full GC. You can use this value to set the
max heap size (using -Xmx ).
If you're running in a container, size the container to have an extra ~1 GB of memory for the
JVM's non-heap needs.
The EventHubConsumerAsyncClient is designed for advanced users who require greater control
and flexibility over their reactive applications. This client offers a low-level interface,
enabling users to manage backpressure, threading, and recovery within the Reactor chain.
Unlike EventProcessorClient, EventHubConsumerAsyncClient doesn't include automatic recovery
mechanisms for all terminal causes. Therefore, users must handle terminal events and select
appropriate Reactor operators to implement recovery strategies.
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you use the
Azure SDK for Java client libraries, we recommend that you file an issue in the Azure SDK
for Java GitHub repository.
Troubleshoot Azure Event Hubs
performance
Article • 04/02/2025
This article provides solutions to common performance problems that you might
encounter when you use the Event Hubs library in the Azure SDK for Java. If you're
looking for solutions to other common problems that you might encounter when you
use Event Hubs, see Troubleshooting Azure Event Hubs.
If the event hub has high traffic and high throughput is expected, the aggregated cost
of continuously calling your callback hinders performance of EventProcessorClient . In
this case, you should use processEventBatch .
For each partition, your callback is invoked one at a time. High processing time in the
callback hinders performance because the EventProcessorClient doesn't continue to
push more events downstream nor request more EventData instances from the Event
Hubs service.
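The following sketch shows an EventProcessorClient configured with processEventBatch instead of a per-event callback. It assumes the azure-messaging-eventhubs and azure-messaging-eventhubs-checkpointstore-blob dependencies; the event hub name, consumer group, container name, and batch settings are illustrative:
Java
import com.azure.messaging.eventhubs.EventProcessorClient;
import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore;
import com.azure.storage.blob.BlobContainerAsyncClient;
import com.azure.storage.blob.BlobContainerClientBuilder;

import java.time.Duration;

public class BatchProcessorSample {
    public static void main(String[] args) {
        // Blob container used to store checkpoints and ownership records.
        BlobContainerAsyncClient checkpointContainer = new BlobContainerClientBuilder()
            .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
            .containerName("event-hub-checkpoints")
            .buildAsyncClient();

        EventProcessorClient processor = new EventProcessorClientBuilder()
            .connectionString(System.getenv("EVENT_HUB_CONNECTION_STRING"), "my-event-hub")
            .consumerGroup("$Default")
            .checkpointStore(new BlobCheckpointStore(checkpointContainer))
            // Receive up to 100 events per callback, waiting at most 1 second for a batch
            // to fill, instead of invoking the callback once per event.
            .processEventBatch(batchContext -> {
                batchContext.getEvents().forEach(event ->
                    System.out.println("Received: " + event.getBodyAsString()));
                if (!batchContext.getEvents().isEmpty()) {
                    batchContext.updateCheckpoint();
                }
            }, 100, Duration.ofSeconds(1))
            .processError(errorContext -> System.err.println("Error on partition "
                + errorContext.getPartitionContext().getPartitionId()
                + ": " + errorContext.getThrowable()))
            .buildEventProcessorClient();

        processor.start();
        // ... run until the application shuts down, then call processor.stop() ...
    }
}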
Costs of checkpointing
When you use Azure Blob Storage as the checkpoint store, there's a network cost to
checkpointing because it makes an HTTP request and waits for a response. This process
could take up to several seconds due to network latency, the performance of Azure Blob
Storage, resource location, and so on.
Use LoadBalancingStrategy.BALANCED or
LoadBalancingStrategy.GREEDY
When you use LoadBalancingStrategy.BALANCED , the EventProcessorClient claims one
partition for every load balancing cycle. If there are 32 partitions in an event hub, it takes
32 load-balancing iterations to claim all the partitions. If users know a set number of
EventProcessorClient instances are running, they can use LoadBalancingStrategy.GREEDY ,
which claims the instance's fair share of the partitions in a single load-balancing cycle.
Configure prefetchCount
The default prefetch value is 500. When the AMQP receive link is opened, it places 500
credits on the link. Assuming that each EventData instance is one link credit,
EventProcessorClient prefetches 500 EventData instances. When all the events are
consumed, the processor client adds 500 credits to the link to receive more messages.
This flow repeats while the EventProcessorClient still has ownership of a partition.
Configuring prefetchCount may have performance implications if the number is too low.
Each time the AMQP receive link places credits, the remote service sends an ACK. For
high throughput scenarios, the overhead of making thousands of client requests and
service ACKs may hinder performance.
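The following sketch shows how both of the tuning options from the preceding sections can be applied to a builder that already has its connection, consumer group, checkpoint store, and handlers configured, as in the earlier batch-processing example. The GREEDY strategy and the prefetch value of 1000 are illustrative choices, not recommendations - measure before tuning:
Java
import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
import com.azure.messaging.eventhubs.LoadBalancingStrategy;

public class ProcessorTuningSample {
    // Applies the tuning options discussed above to a builder that already has its
    // connection, consumer group, checkpoint store, and handlers configured.
    static EventProcessorClientBuilder tune(EventProcessorClientBuilder builder) {
        return builder
            // Claim this instance's share of partitions in a single load-balancing cycle
            // instead of one partition per cycle.
            .loadBalancingStrategy(LoadBalancingStrategy.GREEDY)
            // Raise the default prefetch of 500; the right value depends on event size and
            // processing speed, so measure before and after changing it.
            .prefetchCount(1000);
    }
}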
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommend that you file an issue in
the Azure SDK for Java GitHub repository.
Troubleshoot Azure Service Bus
Article • 02/12/2025
This article covers failure investigation techniques, concurrency, common errors for the
credential types in the Azure Service Bus Java client library, and mitigation steps to
resolve these errors.
In addition to enabling logging, setting the log level to VERBOSE or DEBUG provides
insights into the library's state. The following sections show sample log4j2 and logback
configurations to reduce the excessive messages when verbose logging is enabled.
Configure Log4J 2
Use the following steps to configure Log4J 2:
1. Add the dependencies in your pom.xml using ones from the logging sample
pom.xml , in the "Dependencies required for Log4j2" section.
2. Add log4j2.xml to your src/main/resources folder.
Configure logback
Use the following steps to configure logback:
1. Add the dependencies in your pom.xml using ones from the logging sample
pom.xml , in the "Dependencies required for logback" section.
2. Add logback.xml to your src/main/resources folder.
To trace the AMQP transport frames, set the PN_TRACE_FRM=1 environment variable.
The following configuration file logs TRACE level output from Proton-J to the proton-trace.log file:
properties
handlers=java.util.logging.FileHandler
.level=OFF
proton.trace.level=ALL
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.pattern=proton-trace.log
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=[%1$tF %1$tr] %3$s %4$s: %5$s %n
Reduce logging
One way to decrease logging is to change the verbosity. Another way is to add filters
that exclude logs from packages (logger names) such as com.azure.messaging.servicebus or
com.azure.core.amqp. For examples, see the XML files in the Configure Log4J 2 and Configure logback sections.
When you submit a bug, the log messages from classes in the following packages are
interesting:
com.azure.core.amqp.implementation
com.azure.core.amqp.implementation.handler
com.azure.messaging.servicebus.implementation
Concurrency in ServiceBusProcessorClient
ServiceBusProcessorClient enables the application to configure how many calls to the
message handler happen concurrently, through maxConcurrentCalls and, for session-enabled
entities, maxConcurrentSessions. If the application observes fewer concurrent calls to the
message handler than the configured concurrency, it might be because the thread pool is
not sized appropriately. The client uses Reactor's boundedElastic thread pool to invoke
the message handler. The maximum number of concurrent threads in this pool is limited by
a cap. By default, this cap is ten times the number of available CPU cores. For the
ServiceBusProcessorClient to effectively support the application's desired concurrency
( maxConcurrentCalls or maxConcurrentSessions times maxConcurrentCalls ), you must have a
boundedElastic pool cap value that's higher than the desired concurrency. You can override
the default cap by setting the system property reactor.schedulers.defaultBoundedElasticSize .
You should tune the thread pool and CPU allocation on a case-by-case basis. However,
when you override the pool cap, as a starting point, limit the concurrent threads to
approximately 20-30 per CPU core. We recommend that you cap the desired
concurrency per ServiceBusProcessorClient instance to approximately 20-30. Profile
and measure your specific use case and tune the concurrency aspects accordingly. For
high load scenarios, consider running multiple ServiceBusProcessorClient instances
where each instance is built from a new ServiceBusClientBuilder instance. Also,
consider running each ServiceBusProcessorClient in a dedicated host - such as a
container or VM - so that downtime in one host doesn't impact the overall message
processing.
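The following sketch ties these settings together. It assumes the azure-messaging-servicebus dependency; the queue name, the concurrency of 30, and the pool cap of 60 (for a two-core host, following the 20-30 per core guidance) are illustrative:
Java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;

public class ProcessorConcurrencySample {
    public static void main(String[] args) {
        // Raise the boundedElastic cap above the desired concurrency. The property must be
        // set before Reactor creates its default schedulers, so setting it on the JVM
        // command line (-Dreactor.schedulers.defaultBoundedElasticSize=60) is safer.
        System.setProperty("reactor.schedulers.defaultBoundedElasticSize", "60");

        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
            .connectionString(System.getenv("SERVICE_BUS_CONNECTION_STRING"))
            .processor()
            .queueName("my-queue")
            .maxConcurrentCalls(30) // desired concurrency, kept below the pool cap
            .processMessage(context -> System.out.println(
                "Processing message " + context.getMessage().getMessageId()))
            .processError(context -> System.err.println(
                "Error: " + context.getException()))
            .buildProcessorClient();

        processor.start();
        // ... stop() and close() the processor when the application shuts down ...
    }
}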
Keep in mind that setting a high value for the pool cap on a host with few CPU cores
would have adverse effects. Some signs of low CPU resources, or of a pool with too many
threads on too few CPUs, are frequent timeouts, lost locks, deadlocks, or lower throughput.
If you're running the Java application in a container, then we recommend using two or
more vCPU cores. We don't recommend selecting anything less than 1 vCPU core when
running Java applications in containerized environments. For in-depth recommendations
on resourcing, see Containerize your Java applications.
The Service Bus SDK uses the reactor-executor-* naming pattern for the connection I/O
thread. When the application experiences a shared-connection bottleneck, it might be
reflected in the I/O thread's CPU usage. Also, in a heap dump or in live memory, the
ReactorDispatcher$workQueue object is the work queue of the I/O thread. A long work
queue in the memory snapshot during the bottleneck period might indicate that the
shared I/O thread is overloaded with pending work.
Therefore, if the application load to a Service Bus endpoint is reasonably high in terms
of overall number of sent-received messages or payload size, you should use a separate
builder instance for each client that you build. For example, for each entity - queue or
topic - you can create a new ServiceBusClientBuilder and build a client from it. In case
of extremely high load to a specific entity, you might want to either create multiple
client instances for that entity or run clients in multiple hosts - for example, containers
or VMs - to load balance.
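For example, the following sketch builds a sender for one queue and a receiver for another from two separate ServiceBusClientBuilder instances, so each client gets its own AMQP connection (the queue names are illustrative):
Java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class SeparateConnectionsSample {
    public static void main(String[] args) {
        String connectionString = System.getenv("SERVICE_BUS_CONNECTION_STRING");

        // Each ServiceBusClientBuilder owns its own AMQP connection, so the two clients
        // below don't compete for a single shared connection I/O thread.
        ServiceBusSenderClient ordersSender = new ServiceBusClientBuilder()
            .connectionString(connectionString)
            .sender()
            .queueName("orders")
            .buildClient();

        ServiceBusReceiverClient auditReceiver = new ServiceBusClientBuilder()
            .connectionString(connectionString)
            .receiver()
            .queueName("audit")
            .buildClient();

        // ... use the clients, then close them when the application shuts down ...
        auditReceiver.close();
        ordersSender.close();
    }
}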
Application Gateway offers several security policies that support different TLS protocol
versions. There are predefined policies that enforce TLSv1.2 as the minimum version, and
there are also older policies that allow TLSv1.0 as the minimum version. The HTTPS front
end has a TLS policy applied.
Right now, the Service Bus SDK doesn't recognize certain remote TCP terminations by
the Application Gateway front end, which uses TLSv1.0 as the minimum version. For
instance, if the front end sends TCP FIN, ACK packets to close the connection when its
properties are updated, the SDK can't detect it, so it won't reconnect, and clients can't
send or receive messages anymore. Such a halt only happens when using TLSv1.0 as the
minimum version. To mitigate, use a security policy with TLSv1.2 or higher as the
minimum version for the Application Gateway front-end.
Support for TLSv1.0 and 1.1 across all Azure services has been announced to end by
31 October 2024, so transitioning to TLSv1.2 or higher is strongly recommended.
The Service Bus client supports running a background lock renew task that renews the
message lock continuously each time before it expires. By default, the lock renew task
runs for 5 minutes. You can adjust the lock renew duration by using
ServiceBusReceiverClientBuilder.maxAutoLockRenewDuration(Duration) . If you pass the
Duration.ZERO value, the lock renew task is disabled.
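For example, the following sketch configures a receiver whose lock renew task runs for up to 10 minutes (the queue name and duration are illustrative):
Java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;

import java.time.Duration;

public class LockRenewalSample {
    public static void main(String[] args) {
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
            .connectionString(System.getenv("SERVICE_BUS_CONNECTION_STRING"))
            .receiver()
            .queueName("my-queue")
            // Keep renewing message locks for up to 10 minutes instead of the 5-minute
            // default. Passing Duration.ZERO would disable the background renewal task.
            .maxAutoLockRenewDuration(Duration.ofMinutes(10))
            .buildClient();

        // ... receive and settle messages, then close the receiver ...
        receiver.close();
    }
}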
The following list describes some of the usage patterns or host environments that can
lead to the lock lost error:
The lock renew task is disabled and the application's message processing time
exceeds the lock duration set at the resource level.
The application's message processing time exceeds the configured lock renew task
duration. Note that, if the lock renew duration is not set explicitly, it defaults to 5
minutes.
The application has turned on the Prefetch feature by setting the prefetch value to
a positive integer using
ServiceBusReceiverClientBuilder.prefetchCount(prefetch) . When the Prefetch
feature is enabled, the client will retrieve the number of messages equal to the
prefetch from the Service Bus entity - queue or topic - and store them in the in-
memory prefetch buffer. The messages stay in the prefetch buffer until they're
received into the application. The client doesn't extend the lock of the messages
while they're in the prefetch buffer. If the application processing takes so long that
message locks expire while staying in the prefetch buffer, then the application
might acquire the messages with an expired lock. For more information, see Why is
Prefetch not the default option?
The host environment has occasional network problems - for example, transient
network failure or outage - that prevent the lock renew task from renewing the
lock on time.
The host environment lacks enough CPUs or has shortages of CPU cycles
intermittently that delays the lock renew task from running on time.
The host system time isn't accurate - for example, the clock is skewed - delaying
the lock renew task and keeping it from running on time.
The connection I/O thread is overloaded, impacting its ability to execute lock
renew network calls on time. The following two scenarios can cause this issue:
The application is running too many receiver clients sharing the same
connection. For more information, see the Connection sharing bottleneck
section.
The application has configured ServiceBusReceiverClient.receiveMessages or
ServiceBusProcessorClient to have a large maxMessages or maxConcurrentCalls value.
The number of lock renew tasks in the client is equal to the maxMessages or
maxConcurrentCalls parameter value set for ServiceBusProcessorClient or
ServiceBusReceiverClient.receiveMessages, so a large value means many renewal tasks
running in parallel. Besides overloading the I/O thread, the resulting
multiple network calls can also have an adverse effect in the form of Service Bus
namespace throttling.
If the host isn't sufficiently resourced, the lock can still be lost even if there are only a
few lock renew tasks running. If you're running the Java application in a container, then
we recommend using two or more vCPU cores. We don't recommend selecting anything
less than 1 vCPU core when running Java applications in containerized environments.
For in-depth recommendations on resourcing, see Containerize your Java applications.
The same remarks about locks are also relevant for a Service Bus queue or a topic
subscription that has sessions enabled. When the receiver client connects to a session in
the resource, the broker applies an initial lock to the session. To maintain the lock on the
session, the lock renew task in the client has to keep renewing the session lock before it
expires. For a session-enabled resource, the underlying partitions sometimes move to
achieve load balancing across Service Bus nodes - for example, when new nodes are
added to share the load. When that happens, session locks can be lost. If the application
tries to complete or abandon a message after the session lock is lost, the API call fails
with the error com.azure.messaging.servicebus.ServiceBusException: The session lock
was lost. Request a new session receiver .
Next steps
If the troubleshooting guidance in this article doesn't help to resolve issues when you
use the Azure SDK for Java client libraries, we recommend that you file an issue in
the Azure SDK for Java GitHub repository.
This article describes dependency version conflicts and how to troubleshoot them.
Azure client libraries for Java depend on popular third-party libraries such as the
following ones:
Jackson
Netty
Reactor
SLF4J
Many Java applications and frameworks use these libraries directly or transitively, which
leads to version conflicts. Dependency managers such as Maven and Gradle
resolve all dependencies so that there's only a single version of each dependency on the
classpath. However, it's not guaranteed that the resolved dependency version is
compatible with all consumers of that dependency in your application. For more
information, see Introduction to the Dependency Mechanism in the Maven
documentation and Understanding dependency resolution in the Gradle
documentation.
For more information on conflict resolution in such environments, see the Create a fat
JAR section later in this article.
Note
If you use earlier versions of Spark, or if another library you use requires an even earlier
version of Jackson that the Azure SDK for Java doesn't support, continue reading this
article for possible mitigation steps.
If you see LinkageError (or any of its subclasses) related to the Jackson API, check the
message of the exception for runtime version information. For example:
com.azure.core.implementation.jackson.JacksonVersionMismatchError:
com/fasterxml/jackson/databind/cfg/MapperBuilder Package versions: jackson-
Look for warning and error logs from JacksonVersion . For more information, see
Configure logging in the Azure SDK for Java. For example: [main] ERROR
com.azure.core.implementation.jackson.JacksonVersion - Version '2.9.0' of package
Note
Check that all of the Jackson packages have the same version.
For the list of packages used by Azure SDK and the supported Jackson versions, see the
Support for multiple Jackson versions section.
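If you want to confirm which Jackson version your application actually resolved at runtime, a quick check such as the following sketch can help. It prints the version constants published by the jackson-core and jackson-databind packages, which should match each other and fall within the supported range:
Java
import com.fasterxml.jackson.core.json.PackageVersion;

public class JacksonVersionCheck {
    public static void main(String[] args) {
        // Print the resolved versions of jackson-core and jackson-databind. They should
        // match each other and fall within the range that the Azure SDK supports.
        System.out.println("jackson-core:     " + PackageVersion.VERSION);
        System.out.println("jackson-databind: "
            + com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION);
    }
}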
The dependencies listed in the Azure SDK BOM are tested rigorously to avoid
dependency conflicts.
Avoid downgrading the Azure SDK version because it may expose your application to
known vulnerabilities and issues.
Shade libraries
Sometimes there's no combination of libraries that works together, and shading is the
last resort.
Note
Shading has significant drawbacks: it increases package size and number of classes
on the classpath, it makes code navigation and debugging hard, doesn't relocate
JNI code, breaks reflection, and may violate code licenses among other things. It
should be used only after other options are exhausted.
Shading enables you to include dependencies within a JAR at build time, then rename
packages and update application code to use the code in the shaded location. Diamond
dependency conflict is no longer an issue because there are two different copies of a
dependency. You may shade a library that has a conflicting transitive dependency or a
direct application dependency, as described in the following list:
Transitive dependency conflict: For example, third-party library A requires Jackson
2.9, which Azure SDKs don't support, and it's not possible to update A . Create a
new module, which includes A and shades (relocates) Jackson 2.9 and, optionally,
other dependencies of A .
Application dependency conflict: Your application uses Jackson 2.9 directly. While
you're working on updating your code, you can shade and relocate Jackson 2.9
into a new module with relocated Jackson classes instead.
Note
Creating a fat JAR with relocated Jackson classes doesn't resolve a version conflict in
these examples - it only forces a single shaded version of Jackson.
Dependency Supported versions
Jackson 2.10.0 and newer minor versions are compatible. For more information, see the Support for multiple Jackson versions section.
SLF4J 1.7.*
netty-tcnative-boringssl-static 2.0.*
netty-common 4.1.*
reactor-core 3.X.* - Major and minor version numbers must exactly match the ones your azure-core version depends on. For more information, see the Project Reactor policy on deprecations.
Note
Using old versions of Jackson may expose applications to known vulnerabilities and
issues. For more information, see the list of known vulnerabilities for Jackson
libraries .
When pinning a specific version of Jackson, make sure to do it for all modules used by
Azure SDK, which are shown in the following list:
jackson-annotations
jackson-core
jackson-databind
jackson-dataformat-xml
jackson-datatype-jsr310
Environments like Apache Spark, Apache Flink, and Databricks might bring older
versions of azure-core that don't yet depend on azure-json. As a result, when using
newer versions of Azure libraries in such environments, you might get errors similar to
java.lang.NoClassDefFoundError: com/azure/json/JsonSerializable. You can mitigate this
issue by making sure that a recent version of azure-core, together with its azure-json
dependency, ends up on the classpath.
Next steps
Now that you're familiar with dependency version conflicts and how to troubleshoot
them, see Dependency Management for Java for information on the best way to
prevent them.
This article describes a few tools that can diagnose networking issues of various
complexities. These issues include scenarios that range from troubleshooting an
unexpected response value from a service, to root-causing a connection-closed
exception.
For client-side troubleshooting, the Azure client libraries for Java offer a consistent and
robust logging story, as described in Configure logging in the Azure SDK for Java.
However, the client libraries make network calls over various protocols, which may lead
to troubleshooting scenarios that extend outside of the troubleshooting scope
provided. When these problems occur, the solution is to use the external tooling
described in this article to diagnose networking issues.
Fiddler
Fiddler is an HTTP debugging proxy that allows for requests and responses passed
through it to be logged as-is. The raw requests and responses that you capture can help
you troubleshoot scenarios where the service gets an unexpected request, or the client
receives an unexpected response. To use Fiddler, you need to configure the client library
with an HTTP proxy. If you use HTTPS, you need extra configuration to inspect the
decrypted request and response bodies.
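For example, the following sketch routes a client's HTTP traffic through Fiddler's default endpoint on localhost:8888. It assumes the azure-core-http-netty and azure-storage-blob dependencies; the Blob Storage client is used purely as an illustration - any HTTP-based Azure client builder accepts an HttpClient in the same way:
Java
import com.azure.core.http.HttpClient;
import com.azure.core.http.ProxyOptions;
import com.azure.core.http.netty.NettyAsyncHttpClientBuilder;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

import java.net.InetSocketAddress;

public class FiddlerProxySample {
    public static void main(String[] args) {
        // Route the client library's HTTP traffic through Fiddler, which listens on
        // localhost:8888 by default.
        ProxyOptions proxyOptions = new ProxyOptions(
            ProxyOptions.Type.HTTP, new InetSocketAddress("localhost", 8888));

        HttpClient httpClient = new NettyAsyncHttpClientBuilder()
            .proxy(proxyOptions)
            .build();

        BlobServiceClient blobService = new BlobServiceClientBuilder()
            .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
            .httpClient(httpClient)
            .buildClient();

        // Every request made by this client now shows up in Fiddler.
        blobService.listBlobContainers().forEach(container ->
            System.out.println(container.getName()));
    }
}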
To inspect HTTPS traffic, the Java Runtime Environment (JRE) must trust Fiddler's root
certificate: export the certificate from Fiddler and import it into the JRE's trust store,
for example with the keytool utility. If the certificate isn't trusted, an HTTPS request
through Fiddler may fail with security warnings.
Wireshark
Wireshark is a network protocol analyzer that can capture network traffic without
needing changes to application code. Wireshark is highly configurable and can capture
broad to specific, low-level network traffic. This capability is useful for troubleshooting
scenarios such as a remote host closing a connection or having connections closed
during an operation. The Wireshark GUI displays captures using a color scheme that
identifies unique capture cases, such as a TCP retransmission, RST, and so on. You can
also filter captures either at capture time or during analysis.
Configure a capture filter
Capture filters reduce the number of network calls that are captured for analysis.
Without capture filters, Wireshark captures all traffic that goes through a network
interface. This behavior can produce massive amounts of data where most of it may be
noise to the investigation. Using a capture filter helps preemptively scope the network
traffic being captured to help target an investigation. For more information, see
Capturing Live Network Data in the Wireshark documentation.
The following example adds a capture filter to capture network traffic sent to or received
from a specific host.
In Wireshark, navigate to Capture > Capture Filters... and add a new filter with the value
host <host-IP-or-hostname> . This filter captures traffic only to and from that host. If the
application communicates to multiple hosts, you can add multiple capture filters, or you
can add the host IP/hostname with the 'OR' operator to provide looser capture filtering.
Capture to disk
You might need to run an application for a long time to reproduce an unexpected
networking exception, and to see the traffic that leads up to it. Additionally, it may not
be possible to maintain all captures in memory. Fortunately, Wireshark can log captures
to disk so that they're available for post-processing. This approach avoids the risk of
running out of memory while you reproduce an issue. For more information, see File
Input, Output, And Printing in the Wireshark documentation.
The following example sets up Wireshark to persist captures to disk with multiple files,
where the files split on either 100k captures or 50 MB size.
In Wireshark, navigate to Capture > Options and find the Output tab, then enter a file
name to use. This configuration causes Wireshark to persist captures to a single file.
To enable capture to multiple files, select Create a new file automatically and then
select after 100000 packets and after 50 megabytes. This configuration has Wireshark
create a new file when one of the predicates is matched. Each new file uses the same
base name as the file name entered and appends a unique identifier.
If you want to limit the number of files that Wireshark can create, select Use a ring
buffer with X files. This option limits Wireshark to logging with only the specified
number of files. When that number of files is reached, Wireshark begins overwriting the
files, starting with the oldest.
Filter captures
Sometimes you can't tightly scope the traffic that Wireshark captures - for example, if
your application communicates with multiple hosts using various protocols. In this
scenario, generally in combination with the persisted captures outlined previously, it's
easier to run the analysis after the network traffic has been captured. Wireshark supports
a filter-like syntax for analyzing captures. For more information, see Working With
Captured Packets in the Wireshark documentation.
The following example loads a persisted capture file and filters on ip.src_host==<IP> .
In Wireshark, navigate to File > Open and load a persisted capture from the file location
used previously. After the file loads, a filter input appears underneath the menu bar. In
the filter input, enter ip.src_host==<IP> . This filter limits the capture view so that it
shows only captures where the source was the host with the IP <IP> .
Next steps
This article covered using various tools to diagnose networking issues when working
with the Azure SDK for Java. Now that you're familiar with the high-level usage
scenarios, you can begin exploring the SDK itself. For more information on the APIs
available, see the Azure SDK for Java libraries.
The following table links to Java source you can use to create and configure Azure virtual
machines.
Create a virtual machine from a custom image: Create a custom virtual machine image and use it to create new virtual machines.
Create a virtual machine using specialized VHD from a snapshot: Create snapshots from the virtual machine's OS and data disks, create managed disks from the snapshots, and then create a virtual machine by attaching the managed disks.
Create virtual machines in parallel in the same network: Create virtual machines in the same region on the same virtual network with two subnets in parallel.
Azure management libraries for Java - Web
app samples
06/02/2025
The following table links to Java source you can use to create and configure web apps.
Create an app
Create a web app and deploy from FTP or GitHub: Deploy web apps from local Git, FTP, and continuous integration from GitHub.
Create a web app and manage deployment slots: Create a web app and deploy to staging slots, and then swap deployments between slots.
Configure an app
Create a web app and configure a custom domain: Create a web app with a custom domain and self-signed SSL certificate.
Scale an app
Scale a web app with high availability across multiple regions: Scale a web app in three different geographical regions and make them available through a single endpoint using Azure Traffic Manager.
Connect a web app to a storage account: Create an Azure storage account and add the storage account connection string to the app settings.
Connect a web app to a SQL database: Create a web app and SQL database, and then add the SQL database connection string to the app settings.
Azure management libraries for Java - SQL
Database samples
06/02/2025
The following table links to Java source you can use to manage SQL Database.
Connect and query data from Azure SQL Database using JDBC: Configure a sample database, then run select, insert, update, and delete commands.
Create and manage SQL databases: Create SQL databases, set performance levels, and configure firewalls.
Manage SQL databases across multiple regions: Create a master SQL database and read-only databases from the master in multiple regions. Connect VMs to their nearest SQL database instance with a virtual network and firewall rules.
Java source samples for Microsoft Entra ID
06/02/2025
The following table links to Java source you can use to access and work with Microsoft Entra ID
in your apps.
Integrating Microsoft Entra ID into a Java web application: Set up OAuth2 authentication in a Java web app.
Integrating Microsoft Entra ID into a Java command line using username and password: Obtain a JWT access token through OAuth 2.0, then use the access token to authenticate with a Microsoft Entra protected web API.
Manage users, groups, and roles with the Graph API: Manage users, groups, roles, and service principals with the Graph API using the management API.
Manage service principals: Create a service principal for an application, assign it a role, and use the service principal to access resources in the subscription.
Java samples for Azure Container Service
06/02/2025
The following table links to Java source you can use to create and configure applications
running in Azure Container Service.
Manage Azure Container Registries: Create a new Azure Container Registry and add a new image.
Manage Azure Container Service: Create an Azure Container Service with Kubernetes orchestration.
Deploy an image from Azure Container Registry into a new Linux Web App: Deploy a Docker image running Tomcat to a new web app running in Azure App Service for Linux.
Azure SDK for Java libraries
05/21/2025
The Azure SDK for Java is composed of many libraries - one for each Azure service.
Additionally, there are libraries designed to integrate with open-source projects, such as
Spring, to provide Azure-specific support. All Azure SDK for Java libraries are designed to work
together and provide a cohesive developer experience, as described in Use the Azure SDK for
Java. The SDK achieves this cohesion through a common set of concepts, including HTTP
pipelines, identity and authentication, and logging.
The following tables show all libraries that exist in the Azure SDK for Java. These tables provide
links to all relevant repositories and documentation. To keep up to date with the Azure SDKs,
consider following the @AzureSDK Twitter account and the Azure SDK blog .
All libraries
Name Package Docs Source
JDBC Authentication Plugin for MySQL Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
JDBC Authentication Plugin for PostgreSQL Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Azure Blob Storage Checkpoint Store Maven 1.20.7 docs GitHub 1.20.7
Maven 1.21.0- GitHub 1.21.0-
beta.1 beta.1
CloudNative CloudEvents with Event Grid Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Core - Client - Core Serializer Apache Avro Maven 1.0.0- docs GitHub 1.0.0-
beta.56 beta.56
Core - Client - Core Serializer Apache Jackson Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Core - Client - Core Serializer GSON JSON Maven 1.3.8 docs GitHub 1.3.8
Core - Client - Core Serializer Jackson JSON Maven 1.5.8 docs GitHub 1.5.8
Core - Plugin - Tracing OpenTelemetry Plugin Maven 1.0.0- docs GitHub 1.0.0-
beta.56 beta.56
beta.2 beta.2
OLTP Spark 3.1 Connector for Azure Cosmos DB SQL Maven 4.37.1 docs GitHub 4.37.1
API
OLTP Spark 3.2 Connector for Azure Cosmos DB SQL Maven 4.37.1 docs GitHub 4.37.1
API
OLTP Spark 3.3 Connector for Azure Cosmos DB SQL Maven 4.37.2 GitHub 4.37.2
API
beta.1 beta.1
Resource Management - App Compliance Automation Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Arize AI Observability Eval Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Resource Management - Azure Stack HCI Maven 1.0.0- docs GitHub 1.0.0-
beta.5 beta.5
Resource Management - Azure VMware Solution Maven 1.2.0 docs GitHub 1.2.0
Resource Management - Container Service Fleet Maven 1.2.0 docs GitHub 1.2.0
Resource Management - Content Delivery Network Maven 2.51.0 docs GitHub 2.51.0
Resource Management - Cosmos DB for PostgreSQL Maven 1.0.0 docs GitHub 1.0.0
Maven 1.1.0- GitHub 1.1.0-
beta.2 beta.2
Resource Management - Data Box Edge Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Data Lake Analytics Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Data Lake Store Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Device Provisioning Services Maven 1.1.0 docs GitHub 1.1.0
Resource Management - Hardware Security Module Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Health Data AI Services Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Hybrid Container Service Maven 1.1.0 docs GitHub 1.1.0
Resource Management - IoT Firmware Defense Maven 1.1.0 docs GitHub 1.1.0
Resource Management - Machine Learning Services Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Resource Management - Managed Network Fabric Maven 1.1.0 docs GitHub 1.1.0
Resource Management - Managed Service Identity Maven 2.51.0 docs GitHub 2.51.0
Resource Management - Migration Discovery SAP Maven 1.0.0- docs GitHub 1.0.0-
beta.2 beta.2
Resource Management - MySQL Flexible Server Maven 1.0.0 docs GitHub 1.0.0
Resource Management - New Relic Observability Maven 1.2.0 docs GitHub 1.2.0
Resource Management - Open Energy Platform Maven 1.0.0- docs GitHub 1.0.0-
beta.2 beta.2
Resource Management - Palo Alto Networks - Next Maven 1.2.0 docs GitHub 1.2.0
Generation Firewall
Resource Management - PostgreSQL Flexible Server Maven 1.1.0 docs GitHub 1.1.0
Maven 1.2.0- GitHub 1.2.0-
beta.1 beta.1
Resource Management - Recovery Services Backup Maven 1.6.0 docs GitHub 1.6.0
Resource Management - Recovery Services Data Maven 1.0.0- docs GitHub 1.0.0-
Replication beta.2 beta.2
Resource Management - Recovery Services Site Maven 1.3.0 docs GitHub 1.3.0
Recovery
Resource Management - Red Hat OpenShift Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Resource Management - Service Fabric Managed Maven 1.0.0 docs GitHub 1.0.0
Clusters
Resource Management - Spring App Discovery Maven 1.0.0- docs GitHub 1.0.0-
beta.2 beta.2
Resource Management - SQL Virtual Machine Maven 1.0.0- docs GitHub 1.0.0-
beta.5 beta.5
Resource Management - System Center Virtual Maven 1.0.0 docs GitHub 1.0.0
Machine Manager
Resource Management - Time Series Insights Maven 1.0.0 docs GitHub 1.0.0
Resource Management - Weights & Biases Maven 1.0.0- docs GitHub 1.0.0-
beta.1 beta.1
Resource Management - Workloads SAP Virtual Maven 1.0.0 docs GitHub 1.0.0
Instance
Azure Spring Boot Starter Active Directory Maven 4.0.0 GitHub 4.0.0
Azure Spring Boot Starter Active Directory B2C Maven 4.0.0 GitHub 4.0.0
Azure Spring Boot Starter Key Vault Certificates Maven 3.14.0 GitHub 3.14.0
Azure Spring Boot Starter Key Vault Secrets Maven 4.0.0 GitHub 4.0.0
Azure Spring Boot Starter Service bus Jms Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Appconfiguration Config Web Maven 2.11.0 docs GitHub 2.11.0
Azure Spring Cloud Feature Management Maven 2.10.0 docs GitHub 2.10.0
Azure Spring Cloud Feature Management Web Maven 2.10.0 docs GitHub 2.10.0
Azure Spring Cloud Integration Event Hubs Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Integration Service Bus Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Integration Storage Queue Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Starter Appconfiguration Config Maven 2.11.0 docs GitHub 2.11.0
Azure Spring Cloud Starter Event Hubs Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Starter Event Hubs Kafka Maven 2.14.0 GitHub 2.14.0
Azure Spring Cloud Starter Service bus Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Starter Storage Queue Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Stream Binder Event Hubs Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Stream Binder Service bus Core Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Stream Binder Service bus Queue Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Stream Binder Service bus Topic Maven 4.0.0 GitHub 4.0.0
Azure Spring Cloud Stream Binder Test Maven 2.14.0 GitHub 2.14.0
Spring Cloud Azure Appconfiguration Config Maven 5.22.0 docs GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.1 beta.1
Spring Cloud Azure Appconfiguration Config Web Maven 5.22.0 docs GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.1 beta.1
Spring Cloud Azure Feature Management Maven 5.22.0 docs GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.1 beta.1
Spring Cloud Azure Feature Management Web Maven 5.22.0 docs GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.1 beta.1
Spring Cloud Azure Starter Active Directory Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Active Directory B2C Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter App Configuration Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Data Cosmos DB Maven 5.22.0 GitHub 5.22.0
Spring Cloud Azure Starter Event Hubs Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Integration Event Hubs Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Integration Service Bus Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Integration Storage Queue Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Key Vault Certificates Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Key Vault Secrets Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Service Bus Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Service Bus JMS Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Storage Blob Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Storage File Share Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Storage Queue Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Stream Event Hubs Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Starter Stream Service Bus Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Stream Binder Event Hubs Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Stream Binder Event Hubs Core Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Stream Binder Service Bus Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4
Spring Cloud Azure Stream Binder Service Bus Core Maven 5.22.0 GitHub 5.22.0
Maven 6.0.0- GitHub 6.0.0-
beta.4 beta.4