Contexts & Dependency Injection for Java

Build Compatible Extensions Proposal

Posted by Ladislav Thon on Sep 15, 2020

A few months back we shared our vision for CDI lite. In short, the goal with CDI lite is to make the spec lighter, cloud-friendlier and allow for build-time implementations which are now on the rise.

When we started thinking about a “lite” variant of CDI, amenable to build-time processing, we understood that an alternative to Portable Extensions is necessary. Subsequent discussion on the MicroProfile mailing list confirmed that.

We explored several variants of what the API might look like. This blog post shows the one that, we believe, has the most potential. We call it Build Compatible Extensions, but the name, like everything else described here, is up for debate.

For our explorations of the API design space, we mostly constrained ourselves to one particularly common use case: annotation transformations. Many things that Portable Extensions allow can be achieved simply by adding an annotation here or removing an annotation there. We know that other transformations are necessary, but the API for annotation transformation is the most developed proposal we have. The other ones are significantly rougher, so please bear with us and submit feedback! You can do that in the form of a GitHub issue against the CDI repository; add the lite-extension-api label to your issue.

Before we start, we’d also like to note that in this text, we assume the reader (that is, you!) is familiar with CDI and preferably also Portable Extensions, as we expect some knowledge and don’t explain everything. We also make references to Portable Extensions on several occasions.


In our proposal, Build Compatible Extensions are simply methods annotated with an extension annotation. Extension annotations correspond to phases in which extensions are processed. There are 4 phases:

  1. @Discovery: adding classes to application, registering custom contexts

  2. @Enhancement: transforming annotations

  3. @Synthesis: registering synthetic beans and observers

  4. @Validation: performing custom validation

There are some constraints we put on these methods (such as: they must be public), but they should be pretty obvious and shouldn't be limiting anyone. The container automatically finds extensions and invokes them when the time is right. Exactly when the extensions are invoked can't be defined in too much detail, because we want implementations to be able to invoke them at build time (e.g. during application compilation) or at runtime (e.g. during application deployment). It certainly happens before the application starts. Extensions in earlier phases are guaranteed to run before extensions in later phases.

Extensions can declare an arbitrary number of parameters, all of which are supplied by the container. There's a set of predefined parameter types for each phase, and all implementations would have to support them. We're also thinking of designing an SPI that would allow anyone to contribute support for other parameter types. A class can declare multiple extension methods, in which case they are all invoked on a single instance of the class. If you need to control the order of extension invocations, there's an annotation, @ExtensionPriority, just for that. This is a lot of text already, so let's take a look at an example:

public class MyExtension {
    @Discovery
    public void doSomething() {
        System.out.println("This is an extension, yay!");
    }
}

This doesn't really do anything; it just prints a message whenever the extension is invoked. Let's create something more interesting. Say, moving a qualifier annotation from one class to another. Let's assume that we have these classes in our application.

A qualifier annotation:

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@interface MyQualifier {
}

A service interface:

interface MyService {
    String hello();
}

Two implementations, one with the qualifier and the other unqualified:

@Singleton
@MyQualifier
class MyFooService implements MyService {
    public String hello() {
        return "foo";
    }
}

@Singleton
class MyBarService implements MyService {
    public String hello() {
        return "bar";
    }
}

A class that uses the service:

@Singleton
class MyServiceUser {
    @Inject
    @MyQualifier
    MyService myService;
}

Here, it’s pretty clear that when the CDI container instantiates the MyServiceUser class, it will inject a MyFooService into the myService field. With a simple Build Compatible Extension, we can “transfer” the qualifier annotation from MyFooService to MyBarService:

class MyExtension {
    @Enhancement
    public void configure(ClassConfig<MyFooService> foo,
                          ClassConfig<MyBarService> bar) {
        foo.removeAnnotation(it -> it.declaration().name().equals(MyQualifier.class.getName()));
        bar.addAnnotation(MyQualifier.class);
    }
}

I’m sure you understand the extension pretty well already: with this extension present in the application, the CDI container will consider the MyFooService not annotated @MyQualifier, and at the same time, it will consider the MyBarService annotated @MyQualifier. In the end, MyServiceUser.myService will no longer hold a MyFooService; it will hold a MyBarService instead. We have successfully “moved” an annotation from one class to another, thereby altering how the CDI container behaves.

This is a very simple example, but using the exact same API, one can achieve many things. For example, if the CDI container doesn’t treat all classes as beans (in CDI Lite, this isn’t required), all it takes to create a bean out of a class is just adding a bean defining annotation: myClass.addAnnotation(Singleton.class); To “veto” a class, again, just add an annotation: myClass.addAnnotation(Vetoed.class); Etc. etc. etc.
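For illustration, an @Enhancement extension applying these two tricks might look like the following sketch (MyPlainService and MyLegacyService are made-up class names, not part of the proposal):

```java
public class MyBeanRegistrationExtension {
    @Enhancement
    public void configure(ClassConfig<MyPlainService> service,
                          ClassConfig<MyLegacyService> legacy) {
        // turn MyPlainService into a bean by adding a bean defining annotation
        service.addAnnotation(Singleton.class);
        // prevent MyLegacyService from ever becoming a bean
        legacy.addAnnotation(Vetoed.class);
    }
}
```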

Extension parameters

By now, you should have a general idea of what extensions look like. If you want to know the gory details, read on – but be warned, this is going to be long. You might want to skip directly to the conclusion at the end. Still here? Good! As we said above, an extension can declare an arbitrary number of parameters. The parameters are where extensions become interesting, so let's describe in detail which parameters extensions can declare.


@Discovery

Just two parameter types are possible: AppArchiveBuilder, to register custom classes so that the CDI container treats them as part of the application, and Contexts, to register custom contexts.


@Enhancement

As mentioned above, we have focused mostly on this phase. Therefore, we have a fairly elaborate API that allows inspecting and modifying the application's annotations.

Inspecting code

You can look at all the classes, methods and fields in an application, and make decisions based on your findings. For that, an extension can declare parameters of these types:

  • ClassInfo<MyService>: to look at one particular class

  • Collection<ClassInfo<? extends MyService>>: to look at all subclasses

  • Collection<ClassInfo<? super MyService>>: to look at all superclasses

  • Collection<ClassInfo<?>>: to look at all classes

  • Collection<MethodInfo<MyService>>: to look at all methods declared on one class

  • Collection<MethodInfo<? extends MyService>>: to look at all methods declared on all subclasses

  • Collection<MethodInfo<? super MyService>>: to look at all methods declared on all superclasses

  • Collection<MethodInfo<?>>: to look at all methods declared on all classes

  • Collection<FieldInfo<MyService>>: to look at all fields declared on one class

  • Collection<FieldInfo<? extends MyService>>: to look at all fields declared on all subclasses

  • Collection<FieldInfo<? super MyService>>: to look at all fields declared on all superclasses

  • Collection<FieldInfo<?>>: to look at all fields declared on all classes

Such parameters can also be annotated @WithAnnotations, in which case only those classes/methods/fields annotated with the given annotations will be provided. The ClassInfo, MethodInfo and FieldInfo types give you visibility into all the interesting details about the given declarations. You can drill down to method parameters, their types, annotations, and so on.
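As a sketch of how @WithAnnotations might be combined with these parameter types (the extension class and the use of MyQualifier here are our own invention):

```java
public class MyInspectingExtension {
    @Enhancement
    public void inspect(@WithAnnotations(MyQualifier.class)
                        Collection<ClassInfo<?>> classes) {
        // only classes annotated @MyQualifier are provided here
        for (ClassInfo<?> clazz : classes) {
            System.out.println("found qualified class: " + clazz.name());
        }
    }
}
```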

The new metamodel

Actually, let's take a small detour to explain these ClassInfo, MethodInfo and FieldInfo types, because they totally deserve it. You will note that they are actually very similar to the Java Reflection API. However, they do not rely on the Reflection API in any way, unlike the types in Portable Extensions. This is an important goal of the entire CDI Lite effort: make it possible to implement CDI completely at build time. To that end, we designed a completely new metamodel for Java classes, which can be implemented solely on top of Java bytecode.

The type hierarchy looks like this: at the top, there's an AnnotationTarget. That's basically anything that can be annotated. In Java, this means declarations, such as classes or methods, and types, such as the type of a method parameter. The AnnotationTarget lets you look at its annotations using these 4 methods:

boolean hasAnnotation(Class<? extends Annotation> annotationType);
AnnotationInfo annotation(Class<? extends Annotation> annotationType);
Collection<AnnotationInfo> repeatableAnnotation(
        Class<? extends Annotation> annotationType);
Collection<AnnotationInfo> annotations();

The method hasAnnotation(...) returns whether a given annotation target (such as a class) has an annotation of given type. The annotation(...) method returns information about an annotation of a given type present on a given target (we’ll see more about AnnotationInfo soon). The repeatableAnnotation(...) method returns all annotations of a given repeatable annotation type, and finally the annotations() method returns all annotations present on a given target. Let’s stop for a short example. Let’s say we have a ClassInfo for the MyServiceUser class, which we’ve seen in the previous example. We can do all kinds of interesting things with it, but here, let’s just check if the class has a @Singleton annotation, and if so, print all annotations on all fields annotated @Inject:

ClassInfo<MyServiceUser> clazz = ...;
if (clazz.hasAnnotation(Singleton.class)) { // we know this is true
    for (FieldInfo<MyServiceUser> field : clazz.fields()) {
        if (field.hasAnnotation(Inject.class)) {
            field.annotations().forEach(System.out::println);
        }
    }
}

You might have noticed that the ClassInfo, MethodInfo and FieldInfo types have a type parameter. This is only useful when declaring an extension parameter – there, it expresses a query (such as: give me all fields declared on all subclasses of MyService). In all other cases, it can be pretty much ignored.

A short tour through the AnnotationInfo type: you can access the target() of the annotation, as well as the annotation declaration(), and you can see the annotation attributes using the hasAttribute(String) and attribute(String) methods. Given that an attribute named value is particularly common, there are also hasValue() and value(). And finally, there's attributes() to access all annotation attributes at once. Annotation attributes are represented by the AnnotationAttribute interface, which has a name() and a value(). The attribute value is represented by AnnotationAttributeValue, which allows figuring out the actual type of the value, as well as obtaining its representation as an ordinary Java type.

As mentioned above, there are two kinds of AnnotationTargets: declarations and types. Therefore, we have DeclarationInfo as the top-level type for representing Java declarations, and Type as the top-level type for representing Java types. To distinguish between them, the AnnotationTarget interface has 4 methods:

boolean isDeclaration();
boolean isType();
DeclarationInfo asDeclaration();
Type asType();

The boolean-returning methods return whether a given annotation target is a declaration or a type, and the remaining two methods cast to the corresponding type (or throw an exception). You can find similar methods on DeclarationInfo and Type, for various kinds of declarations and types (for example, DeclarationInfo has isClass(), asClass() and others).

We represent 4 kinds of Java declarations in the new metamodel: classes, methods (including constructors), method parameters, and fields. We're thinking about whether it's worth adding a representation for packages, given that they can also be annotated (using package-info.java). Any opinion here is welcome!

Classes are represented by ClassInfo, which gives access to the name(), superClass(), all implemented superInterfaces(), all typeParameters(), and most importantly, all constructors(), methods() and fields(). Constructors and methods are represented by MethodInfo, which gives access to the name(), parameters(), returnType() and also typeParameters(). Method parameters are represented by ParameterInfo, which gives access to the name(), if it's present (remember that parameter names don't have to be present in bytecode!), and the type(). Finally, fields are represented by FieldInfo, which gives access to name() and type().

As you've surely noticed, we can often get hold of a type of something (method return type, field type, etc.). That's the second kind of AnnotationTarget. As we've mentioned, the top-level representation of types is the Type interface, and there are 7 kinds of types: VoidType, PrimitiveType, ClassType, ArrayType, ParameterizedType, TypeVariable and WildcardType. We won't go into details about these, as the text is already getting rather long. Instead, let's get back to extension parameters!

Modifying code

Not only can you look at classes, methods and fields in your extension, you can also modify them. These modifications include adding and removing annotations, and are only considered by the CDI container. That is, the rest of the application will not see these modifications! For each parameter type mentioned above, such as ClassInfo<MyService> or Collection<MethodInfo<? extends MyService>>, you can also declare a parameter of the corresponding *Config type: ClassConfig<MyService>, Collection<MethodConfig<? extends MyService>> etc. Again you can use @WithAnnotations to narrow down the set of provided objects. Also, ClassConfig is actually a subtype of ClassInfo, so if you need to check a class before you configure it, having a ClassConfig is enough. MethodConfig and FieldConfig are similar. The annotation configuration methods provided by these types are:

void addAnnotation(Class<? extends Annotation> clazz,
                   AnnotationAttribute... attributes);
void addAnnotation(ClassInfo<?> clazz,
                   AnnotationAttribute... attributes);
void addAnnotation(AnnotationInfo annotation);
void addAnnotation(Annotation annotation);
void removeAnnotation(Predicate<AnnotationInfo> predicate);
void removeAllAnnotations();

While technically we could make do with just 2 methods, one for adding and one for removing annotations, we decided to have 6 of them to give extension implementations more flexibility. For example, you can use AnnotationLiterals when adding an annotation, similarly to Portable Extensions, but you don't have to.
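For instance, since one of the overloads accepts a plain Annotation, an AnnotationLiteral could presumably be passed directly (a hypothetical sketch, reusing MyQualifier from the earlier example):

```java
// add a qualifier instance built with an AnnotationLiteral,
// just like in Portable Extensions
clazz.addAnnotation(new AnnotationLiteral<MyQualifier>() {});
```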

Other types

While it's possible to declare a parameter of type Collection<ClassInfo<?>>, it's very likely that you don't want to do this. It's a sign that you need more elaborate processing, for which the simple declarative API is not powerful enough. Luckily, we have an imperative entrypoint as well: AppArchive. With it, you can programmatically construct queries to find classes, methods and fields. If you also want to configure the classes, methods or fields, you can use AppArchiveConfig, which extends AppArchive. For example:

public class MyExtension {
    @Enhancement
    public void configure(AppArchiveConfig app) {
        app.classes()
            .filter(it -> !it.hasAnnotation(MyAnnotation.class))
            .forEach(it -> it.addAnnotation(MyAnnotation.class));
    }
}

Again, you can search for classes, methods and fields, based on where they are declared or what annotations they have. For classes, AppArchive gives you access to a collection of ClassInfo and AppArchiveConfig gives you access to a collection of ClassConfig. Similarly for methods and fields. Above, we have seen a simple way of adding annotations. There are more elaborate ways for advanced use cases, for which you need to create instances of AnnotationAttribute or AnnotationAttributeValue. In such a case, an extension can declare a parameter of type Annotations, which is essentially a factory for these types. Similarly, you can declare a parameter of type Types, which serves as a factory for instances of Type.
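A rough sketch of how the Annotations factory might be used to add an annotation with an attribute; note that the factory method name (attribute) is our guess, not confirmed API:

```java
public class MyAttributeExtension {
    @Enhancement
    public void configure(ClassConfig<MyService> clazz, Annotations annotations) {
        // build an AnnotationAttribute named "value" (hypothetical factory method)
        clazz.addAnnotation(MyAnnotation.class,
                annotations.attribute("value", "hello"));
    }
}
```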


@Synthesis

The most important parameter type you can declare for extensions in this phase is SyntheticComponents. It allows you to register synthetic beans and observers. Note that this API has one significant unsolved problem: how to define the construction and destruction function for synthetic beans, or the observer function for synthetic observers. This needs to work at build time, so we're entering the realm of bytecode generation and similar fun topics. We have some ideas here, and we'll work on adding them to the API proposal.

You can also declare all the parameters that give you access to ClassInfo, MethodInfo and FieldInfo, as described above, including AppArchive. More interestingly, you can also inspect existing beans and observers in the application. This is very similar to inspecting classes, methods and fields, so let's take it quickly. You can declare a parameter of type Collection<BeanInfo<? super MyService>> to obtain information about all beans in the application that have MyService or any of its supertypes as one of the bean types. (Note that this example is not very useful, as Object is one of the supertypes of MyService, and all beans typically have Object as one of their types.) Similarly, you can declare a parameter of type Collection<ObserverInfo<? extends MyEvent>> to obtain information about all observers in the application that observe MyEvent or any of its subtypes.

All the other combinations are of course also possible, and if that is not enough, there's AppDeployment, which gives you more powerful querying features, similar to AppArchive. You can find beans based on their scope, types, qualifiers, or the declaring class. Similarly with observers: you can filter on the observed type, qualifiers, or the declaring class.
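A sketch of a @Synthesis extension inspecting beans and observers (MyEvent is a made-up event class; the printed representation of BeanInfo and ObserverInfo is unspecified in the proposal):

```java
public class MySynthesisExtension {
    @Synthesis
    public void inspect(Collection<BeanInfo<?>> beans,
                        Collection<ObserverInfo<? extends MyEvent>> observers) {
        // look at all beans in the application
        beans.forEach(bean -> System.out.println("bean: " + bean));
        // look at observers of MyEvent and its subtypes
        observers.forEach(observer -> System.out.println("observer: " + observer));
    }
}
```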


@Validation

The most important parameter type you can declare for extensions in this phase is Errors. It allows you to add custom validation errors. What can you validate? Pretty much anything. You can get access to classes, methods and fields, just like in the @Enhancement phase, and you can also get access to beans and observers, just like in the @Synthesis phase. This includes both the Collection<SomethingInfo<...>> approach and the AppArchive / AppDeployment way. Error messages can be simple Strings, optionally accompanied by a DeclarationInfo, BeanInfo or ObserverInfo, or arbitrary Exceptions. If a validation error is added, the container will prevent the application from deploying successfully (or even building, in the case of build-time implementations).
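A sketch of a @Validation extension; the method for registering an error (here called add) is our assumption, as the proposal only says that Errors accepts messages optionally accompanied by a declaration, bean or observer:

```java
public class MyValidationExtension {
    @Validation
    public void validate(Errors errors,
                         Collection<BeanInfo<? super MyService>> beans) {
        // hypothetical rule: there must be at least one bean with MyService
        // or one of its supertypes among its bean types
        if (beans.isEmpty()) {
            errors.add("No bean with type MyService found");
        }
    }
}
```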


Conclusion

You have just finished a deep dive into our current Build Compatible Extensions API proposal.

Together with the API proposal, we also developed a proof-of-concept implementation in Quarkus, so that we know this API can be implemented, and that it is indeed build-time compatible. The proof of concept focuses solely on the @Enhancement phase, but that should be enough for now. It's also worth noting that there is nothing Quarkus-specific about the API. We believe, and indeed our goal is, that any CDI Lite implementation could adopt it using a variety of implementation strategies.

We’re publishing the Quarkus fork in the form of a GitHub repository so that you can also experiment with it. Please bear in mind that the POC implementation is very rough and definitely is not production ready. It should be enough to evaluate the API proposal, though. Here’s how you can get your hands on it:

git clone
cd quarkus-fork
git checkout experiment-cdi-lite-ext
./mvnw -Dquickly

Wait a few minutes or more, depending on how many Quarkus dependencies you already have in your local Maven repository. When the build finishes, you can add a dependency on io.quarkus.arc:cdi-lite-ext-api:999-SNAPSHOT to your project and start playing. Don't forget to also bump other Quarkus dependencies, as well as the Quarkus Maven plugin, to 999-SNAPSHOT!

As mentioned before, we are very keen on hearing your feedback. Please file issues in the CDI GitHub repository with the label lite-extension-api. Let's work together on making these new Build Compatible Extensions a reality!

CDI for the future

Posted by Antoine Sabot-Durand on Mar 09, 2020

A few months ago, CDI turned 10! Yes, CDI 1.0 was released 10 years ago and is today one of the most successful specifications in Java EE and now Jakarta EE. Providing a very efficient programming model and elegant means to integrate with 3rd-party technology, it rapidly became the Java EE cornerstone. As other specifications adopted its programming model, CDI brought a unified way to write Java EE code and made the platform more consistent than before. So when MicroProfile was launched nearly four years ago, it was obvious that CDI should be part of the core platform along with JAX-RS and JSON-P. Today, the MicroProfile programming model relies deeply on CDI, and the platform's success is partly due to the consistency CDI brings to the developer experience.

Yet CDI was designed more than 10 years ago, at a time when monolithic applications were deployed as EARs and WARs sharing a highly dynamic yet monolithic application server. Things have changed: containers are now immutable, obviating the need for hot redeploy and dynamic discovery, and aspects that we used to rely on traditional application servers for, such as availability and redundancy, are now handled by cloud orchestration with Kubernetes. We have also seen a shift from monolithic apps to a greater emphasis on decoupling and resilience through microservices. These factors have given rise to the "single app stack", where the framework and the application are fused as one. With traditional application servers, applications had to be dynamic because they needed to differentiate their needs on shared application server infrastructure, where configuration and resources applied equally to all applications. With single application stacks, applications can express their needs more statically because they are scoped to a single application.

In addition, today’s deployments require increasing efficiency to achieve cost reduction, whether deploying to cloud providers or in-house virtualized data centers. A single application server instance is often replaced by a dozen microservice “single-app stack” instances, with double or triple that amount to achieve redundancy. CDI, as it is today, is not suited for this cloud ready approach. Some of its features imply a rather heavy resource consumption (both boot-time and memory usage) in its implementations.
This blog post covers some of my CDI vision for the future to make the specification relevant for the next 10 years.

How to make CDI cloud ready?

If we want to make CDI a cloud-ready specification, we have to look into all of its requirements that impact memory, CPU, and more broadly performance. As part of that, we should revisit which capabilities and features are still required, since, as mentioned above, application architecture and deployment environments have changed significantly over the years. That alone is not enough, though: we also need to ensure CDI is flexible and adaptable enough to allow for innovative implementation approaches, such as build-time injection wiring.

Of course, it should still be possible to implement runtime-based approaches in a more efficient manner, and in many ways these goals are complementary. One example of this is overly aggressive bean discovery, and thus the extensive type scanning required by CDI during initialization. While bean discovery allows seamless integration, in that 3rd-party non-CDI classes can be discovered as beans, storing state and generating events for classes which were never intended to be beans is very costly. That's the reason why, when we introduced CDI for Java SE in CDI 2.0, we provided a way to disable bean discovery and let developers explicitly declare the classes that should become beans, or create synthetic beans before launching the container.
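For reference, disabling discovery with the standard CDI 2.0 SE bootstrap API looks like the following (MyService stands in for an application class of your own):

```java
SeContainer container = SeContainerInitializer.newInstance()
        .disableDiscovery()               // skip type scanning entirely
        .addBeanClasses(MyService.class)  // explicitly declare bean classes
        .initialize();
```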

Eclipse MicroProfile Challenges

The CDI spec was originally written with Jakarta EE integration in mind, and as such, it assumes the full platform is available and thus the spec and the TCKs require JSF, EL, and EJB. This is not the best fit for MicroProfile, since it targets microservice usage patterns. Under MicroProfile not even Servlet is required, let alone EJB, EL, and JSF. Likewise, MicroProfile can’t be based on CDI SE, since SE does not include JAX-RS integration, which is essential for REST based microservices.

To solve this problem, the MicroProfile specifications effectively depend on a subset of the CDI spec, explicitly noting the above API elements are to be excluded. This is awkward and has led to confusion about how one achieves compliance.

Ultimately, the problem is that the CDI spec couples too many elements together. We need greater flexibility to allow for as many different platforms and environments to adopt and build off the standard.

Introducing CDI Lite

If you followed the CDI 2.0 expert group work a few years back, this title should ring a bell. CDI Lite was in the air back then, since we did expect some future need of added flexibility, but due to a lack of time and clear target we postponed its addition to the spec.

With the CDI programming model being core to MicroProfile, it seems obvious that its use cases should be considered as a first-class usage of CDI. Further, we should enable CDI composition into any other platform, such as future Jakarta EE profiles, or combining it with other frameworks on top of plain old Java as a contribution to future innovations within the Java ecosystem.

So what should CDI Lite’s goal be? In my opinion we should define it as: “the core subset of CDI features that enables the greatest number of CDI implementations, CDI usage within the Java ecosystem, and opens the door to innovation, notably build-time injection approaches.”

Not only would this improve the existing CDI ecosystem, it would also open the door to many other interesting use cases such as:

  • Transcompilation. It becomes possible to compile Java-based injection into other language environments, such as JavaScript. This is currently possible with Dagger, and with Kodein for Kotlin, but not with CDI.

  • Mobile platform support. By enabling build time injection, it also becomes possible for an implementer to target mobile platforms such as Android.

  • Native compilation. By enabling build-time processing, the door is also opened to generating optimized, natively compiled images using static compilers such as the GraalVM project.

How to add CDI lite to the spec?

Today, thanks to CDI 2.0 work, the spec is split into 3 parts: core, CDI for SE and CDI for EE.

Figure 1. Current CDI spec layering

Users and implementers are already familiar with the notion of different "flavors" of CDI. Adding CDI Lite implies some work, but the spec is already well organized to support such a change. Roughly, CDI Lite should be defined as the core subset which Core, EE and SE extend. Further, the EE spec integrations themselves can be defined in such a way that each framework's integration is optional, allowing any combination, such as a standalone JAX-RS implementation with CDI support. Additionally, this would enable future Jakarta EE profiles beyond just Web and Full.

This evolution would split Core CDI into CDI Lite and "Heavy CDI", as shown below. The CDI Lite part could benefit CDI for SE as well as other platforms like MicroProfile.

Figure 2. Splitting CDI Core into Lite and Full

CDI Lite Scope

The essential core of CDI is the programming model exposed to users, which enables uniform, annotation-driven injection and further supports contextual, state-driven injection. The annotations defined in JSR-330 alone are not enough; a number of other common patterns and usages are needed to make the framework complete.

CDI Lite should support popular CDI features like:

  • Beans (class, producers and synthetic beans)

  • CDI DI (typesafe resolution, qualifiers, dynamic lookup)

  • Most built-in scopes (singleton, application, request, and dependent)

  • Contextual instances and their lifecycle

  • Interceptors

  • Events

Other features that may not have reached broad adoption, like decorators, transactional events or specialization, could be added, but additional discussion would be needed. Ideally, we would use the opportunity to reduce technical debt, since each of these increases code complexity, and some of these underused capabilities are a major source of bug reports:

  1. Decorators have 67 issues in the RI alone

  2. Specialization has 28 RI issues and 6 open spec issues

Outside CDI Lite Scope

A number of features are only relevant to particular framework integrations. For example SessionScope is only relevant if the runtime environment implements Servlet (HttpSession), and ConversationScope is incomplete without EL and JSF. These technologies are not needed in a microservice scenario, as is the case in MicroProfile, and so should not be required.

Another capability that should be excluded from CDI Lite, while remaining part of CDI Full, is portable extensions. Portable extensions run in opposition to the goals described above, since they are inherently a runtime-only contract which mandates a very specific container initialization lifecycle. As an example, portable extensions are often stateful, but they are not serializable, and any state they hold can be passed into other beans or into lifecycle events that are required to occur. Further, they are allowed to manipulate almost anything pertaining to a bean at just about every phase of the CDI lifecycle. These factors effectively preclude any implementation that aims to pregenerate wiring at build time. Yet extension implementations rarely need such an open-ended do-anything-you-want API.

Instead, CDI Lite could address these concerns through purpose-built SPIs, such as introducing a new, explicit way to register annotated types and beans. This is already partly done in CDI for SE, where users can programmatically add synthetic beans without portable extensions.

All of the elements outside of the CDI-Lite scope would still be a part of the full specification, as the intention is not to affect existing implementations, only to open the door to new approaches and new implementations.

In the end, we would end up with a much more flexible standard that benefits everyone and carries over the same powerful programming model to new use cases while bringing improved efficiency to modern cloud deployment scenarios.

The introduction of CDI Lite wouldn't be a pretext to deprecate existing features, but a way to make the framework more modular, ready for all of today's use cases, and ready for future evolution.


As you may guess this spec evolution idea will require a lot of analysis and discussion. Should the Jakarta Contexts and Dependency Injection Project agree to go this way, we could imagine starting work on this new CDI version, but as usual, feedback from the community is very important to us. So feel free to share your thoughts in the comments of this post.

Thanks for helping us keep the CDI programming model around for the next 10 years!

CDI 2.0 is released

Posted by Antoine Sabot-Durand on May 15, 2017

Our JCP Expert Group is pleased to announce the release of Contexts and Dependency Injection for Java 2.0.

Specification, reference implementation (JBoss Weld 3.0.0.Final) and TCK can be downloaded here.

What is CDI?

CDI is one of the major Java EE specifications.

It was introduced with Java EE 6 in 2009, updated for Java EE 7, and now, with version 2.0, it is ready for Java EE 8 as well as for Java SE and other platforms like MicroProfile.

CDI defines a powerful set of complementary services that help improve the structure of application code.

  • A well-defined lifecycle for stateful objects bound to lifecycle contexts, where the set of contexts is extensible

  • A sophisticated, typesafe dependency injection mechanism, including the ability to select dependencies at either development or deployment time, without verbose configuration

  • Support for Java EE modularity and the Java EE component architecture: the modular structure of a Java EE application is taken into account when resolving dependencies between Java EE components

  • Integration with the Unified Expression Language (EL), allowing any contextual object to be used directly within a JSF or JSP page

  • The ability to decorate injected objects

  • The ability to associate interceptors to objects via typesafe interceptor bindings

  • An event notification model

  • A web conversation context in addition to the three standard web contexts defined by the Java Servlets specification

  • An SPI allowing portable extensions to integrate cleanly with the container
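To illustrate two of these services, typesafe injection and the event notification model, here is a small sketch (OrderPlaced, OrderService and AuditLog are made-up classes; this needs to run inside a CDI container):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// Hypothetical event payload.
class OrderPlaced {
    final String orderId;
    OrderPlaced(String orderId) { this.orderId = orderId; }
}

@ApplicationScoped
class OrderService {
    @Inject
    Event<OrderPlaced> orderPlaced; // typesafe injection of the event object

    void placeOrder(String id) {
        orderPlaced.fire(new OrderPlaced(id)); // notifies all matching observers
    }
}

@ApplicationScoped
class AuditLog {
    // The container calls this method for every fired OrderPlaced event.
    void onOrderPlaced(@Observes OrderPlaced event) {
        System.out.println("audit: order " + event.orderId);
    }
}
```

Note that the producer and the observer are fully decoupled: OrderService has no compile-time reference to AuditLog.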

Major features included in CDI 2.0

This CDI 2.0 release includes important changes for the platform.

A lot of other small features will be delivered as well. Refer to the upcoming release notes to check them all.

Start using CDI 2.0 today with Weld 3.0

To develop your CDI 2.0 code, just add cdi-api 2.0 to your pom.xml:
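The dependency looks like this (the coordinates below are the usual ones for the CDI 2.0 API on Maven Central):

```xml
<dependency>
    <groupId>javax.enterprise</groupId>
    <artifactId>cdi-api</artifactId>
    <version>2.0</version>
</dependency>
```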


You can then run your code on Java SE or on WildFly.

Running on Java SE with Weld SE

You can then run your code on Java SE thanks to Weld SE; just add this dependency to your project:
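For example (weld-se-core is Weld's SE artifact; the 3.0.0.Final version matches the release announced above):

```xml
<dependency>
    <groupId>org.jboss.weld.se</groupId>
    <artifactId>weld-se-core</artifactId>
    <version>3.0.0.Final</version>
</dependency>
```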


You can then bootstrap the CDI container in your code like this:

public static void main(String... args) {
    try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
        // start the container, retrieve a bean and do work with it
        MyBean myBean = container.select(MyBean.class).get();
        myBean.doWork();
    }
    // the container shuts down automatically after the try-with-resources block
}

Running on Java EE by patching WildFly

We also provide a patch for WildFly 10.1.0 to update Weld, and thus the CDI version, on WildFly.

To do so, just download and unzip WildFly 10.1.0.Final, then download the patch (don’t unzip it), go to the <WILDFLY_HOME>/bin directory and patch the server with the following command:

./jboss-cli.sh --command="patch apply <PATH_TO_PATCH>/"

You should obtain the following result in the console:

{
    "outcome" : "success",
    "result" : {}
}

Your WildFly server is now patched to use CDI 2.0 and Weld 3.0.0.Final.

GlassFish 5.0 with CDI 2.0 support should be released in the coming weeks.

Stay tuned

We’ll provide more articles on CDI 2.0’s new features, so stay tuned by following the @cdispec Twitter account.

CDI 2.0 is in public review

Posted by Antoine Sabot-Durand on Feb 01, 2017 | Comments

CDI 2.0 is now in public review status: you can grab the PRD of the spec or download the Javadoc.

Major features included in CDI 2.0

This CDI 2.0 release includes important changes for the platform.

A lot of other small features will be delivered as well. Refer to the upcoming release notes to check them all.

RI is also available

We also provide a pre-release of the RI and API, so you can start testing CDI 2.0 with Weld 3.0 CR1, which you can download here.

To develop your CDI 2.0 code, just switch cdi-api in your pom.xml to this version:
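The original post does not spell out the exact pre-release version; as a sketch, with VERSION standing in for the pre-release qualifier announced alongside Weld 3.0 CR1:

```xml
<dependency>
    <groupId>javax.enterprise</groupId>
    <artifactId>cdi-api</artifactId>
    <!-- VERSION is a placeholder: use the pre-release version announced with Weld 3.0 CR1 -->
    <version>VERSION</version>
</dependency>
```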


We also provide a patch for WildFly 10.1.0 to help users evaluate CDI 2.0 on a Java EE application server.

To do so just download and unzip WildFly 10.1.0.Final, then download the patch (no need to unzip it), go to the <WILDFLY_HOME>/bin directory and patch the server with the following command:

./jboss-cli.sh --command="patch apply <PATH_TO_PATCH>/"

You should obtain the following result in the console:

{
    "outcome" : "success",
    "result" : {}
}

Your WildFly server is now patched to use CDI 2.0 and Weld 3.0 CR1.

Happy testing!

CDI 2.0 Beta 1 is available

Posted by Antoine Sabot-Durand on Jan 05, 2017 | Comments

After more than 2 years of work, CDI 2.0 is around the corner. Its feature list is now complete, and a Beta of the RI (Weld 3.0 Beta1) is available for download. This post lists what’s included and gives you some insight into the final release and what comes next.

Give it a try

First, you may want to start testing the RI and discover the new API, all the resources are listed below:

  • You can browse the spec document in html or pdf

  • If you prefer Javadoc you can read it here

  • CDI 2.0 Beta API is available on Maven Central. You can also download the jar here

  • Last but not least, the reference implementation (Weld 3.0 Beta 1) is downloadable here and can also be used from Maven

Major features included in CDI 2.0

This CDI 2.0 release includes important changes for the platform.

A lot of other small features will be delivered as well. Refer to the upcoming release notes to check them all.

Release agenda

If everything stays on track we should send the PFD (proposed final draft) with TCK and RI to the JCP soon.

If the JCP ballot is green, CDI 2.0 final could be released before the end of February.

What’s next

Our release plan for CDI 2.0 has always been to deliver it before Java EE 8, to let other specs grab our new features and make the best of them.

We are still considering starting work on CDI 2.1 to clarify or add the few features needed for Java EE 8 and beyond.

Stay tuned.