In 2025, the tech industry is navigating turbulent times. Macroeconomic uncertainty, AI-driven disruption, and ongoing layoffs at major tech companies have created an environment where every budget line is under scrutiny. Developer Relations (DevRel) has been hit particularly hard. Many teams saw downsizing in 2023, 2024, and early 2025, not because they weren't valuable, but because their impact is notoriously difficult to measure in business terms. As one industry voice put it, "Developer Relations experienced significant downsizing and layoffs", driven largely by the challenge of proving ROI.
Yet this isn't the full story. Even amid turbulence, DevRel continues to bring immense value. Research shows that 89% of DevRel teams struggle to prove ROI with traditional marketing-style metrics, precisely because developers discover products through nuanced touchpoints - docs, repos, blogs, events - that resist simple attribution. And yet, high-performing DevRel teams demonstrably drive activation, engagement, and adoption, deliver trusted advocacy that accelerates adoption, and influence product-market fit when integrated strategically. Quality developer content has become more valuable than ever in a noisy landscape, while ecosystem research confirms DevRel strengthens retention, efficiency, innovation, and complementarity.
The paradox is clear: DevRel is vital, but also vulnerable. Its contributions are undeniable, yet its value often remains invisible to decision-makers. To thrive in this market, DevRel must bridge the gap between community impact and business outcomes - making its worth visible in the language leadership understands. That's where the DevRel Value Model comes in.
This blog post will walk you through the way we think about the DevRel Value Model at Cloudinary. I will also give you the background - where it came from, how it was developed, and where it stands today. Please note that some of the values, weights, and calculations were changed for demonstration purposes and do not reflect the actual values used at the organisation.
Developer Relations has long struggled to articulate its impact. Communities thrive, events buzz, and content reaches thousands, but leadership is still left wondering: "How does this translate into business outcomes?" Unlike sales or marketing, where revenue attribution is linear, DevRel's contributions are diffuse - making them easy to undervalue. Let's face it: oftentimes the value of DevRel is not immediately apparent to leadership, and nobody can blame them. If presented inappropriately, all leadership will see is a bunch of numbers that don't add up to anything meaningful, but definitely take money away from the company - money spent on sponsorship, content, travel, and salaries. DevRel must find a way to translate its impact into business outcomes that leadership can understand and appreciate.
Not all companies are the same. In some, the value that DevRel brings is evident - usually because the leadership team has done developer relations before or was involved in the company's early days. In others, DevRel is still a new function and its value is not yet clear. Or, even if the function has been around for a while, it is under increasing pressure to show its value. In these cases, DevRel must find a way to demonstrate its value through data and metrics.
Alright, let's get to it - why did we end up creating a model in the first place? Our journey started with a simple question: "Was that event worth sponsoring?" The reason why we wanted to answer this question was pure and simple - this is the question that we kept on getting from the management team. It wasn't spelled out specifically, but it pretty much read as "Why are we doing so many events and where's the ROI? How do you know if these events were successful?"
Answering this question often relied on someone working in DevRel using a "gut feel" (which is often another "metric", albeit impossible to measure). "Yeah, this event was good because [insert reason here]." But we needed to do a better job; we had to realistically measure success and answer the aforementioned burning questions.
We ended up with something simple - analysing costs against measurable engagement: talk and workshop attendees, newsletter signups, and quality booth conversations. This shift towards structured measurement planted the seed for the DevRel Value Model.
At its core, the DevRel Value Model assigns weights to activities. A workshop attendee is more valuable than a talk attendee; an engaged conversation carries more weight than a passive booth visitor. If you have attended events, you know exactly what I mean. You could have 200 people sitting in a room listening to a talk, but they are there because, most of the time, there's nowhere else to go. Maybe half of them are somewhat interested in the topic, and half of those are actually so interested in the topic that they are willing to try out your product to experiment with it. (I have to add that I am a firm believer in leading with use-case/pain point first and weaving in the product as a solution, as opposed to leading with product first in my DevRel activities.)
Now, it is very likely that the smaller portion of talk attendees would be the same set of people who would attend your workshop, and therefore a workshop attendee carries more "weight" than a talk attendee because the engagement is a lot better. You can apply the same logic to other things you do at events. These typically include activations - some of which actually require registration to your platform, but sometimes it's only a newsletter sign-up.
The model first started with a boolean outcome answering the question "Was the event successful for us?" Yes / No. But how did we arrive at that answer?
It is not always possible, or even necessary, to begin by assigning monetary values to Developer Relations activities. In many cases, what matters most is being able to establish whether an event was successful in relative terms, and to compare that success across multiple engagements. This can be achieved by introducing a weighted scoring system that reflects the varying levels of impact different activities have.
The first step is to identify the key activities that typically occur at your events. These should be the touchpoints that signify meaningful engagement with your audience. For instance, you might wish to track:

- Talk attendees
- Workshop attendees
- Activations (e.g. completing a demo or signing up at the booth)
Each of these activities signals a different degree of engagement. A person who attends a talk may gain some awareness of your product, but a person who joins a workshop is demonstrating a greater willingness to commit time and focus. Someone who completes an activation has moved further still, actively interacting with your product or ecosystem. (Assuming, of course, that your activation involved a registration and the usage of your product(s).)
Once you have defined the activities, the next step is to assign weights that reflect their relative importance. The principle here is straightforward: deeper engagement should carry more weight than lighter engagement. As mentioned before, these are just samples and not the exact values that we used at Cloudinary, but they should give you a good idea of how to assign weights.
You can think of these weights as a way to quantify the value of each activity. For example:

- Talk attendee: 1 point
- Workshop attendee: 3 points
- Activation: 5 points
This weighting acknowledges that not all attendees provide equal value. It allows you to treat a single activation as more impactful than several passive listeners in a talk.
With weights in place, you can now calculate a weighted score for your event. Multiply the number of participants in each category by the relevant weight, and then sum the totals.
Let us take an example:

- 200 talk attendees × 1 = 200 points
- 50 workshop attendees × 3 = 150 points
- 30 activations × 5 = 150 points

Total Score = 500 points
This figure, though not expressed in pounds or dollars, now provides a composite measure of the event's impact.
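The weighted-score calculation above can be sketched in a few lines of Python. The weights and activity names below are the illustrative ones from this post, not any organisation's real values:

```python
# Illustrative weights from this post - tune these to your own organisation.
WEIGHTS = {"talk_attendee": 1, "workshop_attendee": 3, "activation": 5}

def weighted_score(counts: dict) -> int:
    """Multiply each activity count by its weight and sum the totals."""
    return sum(WEIGHTS[activity] * count for activity, count in counts.items())

# The worked example: 200 + 150 + 150 = 500 points
event_a = {"talk_attendee": 200, "workshop_attendee": 50, "activation": 30}
print(weighted_score(event_a))  # 500
```

Keeping the weights in a single dictionary makes it easy to debate and revise them openly without touching the calculation itself.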
In order to compare events of different scales, it is helpful to normalise the score against the cost of the event. Divide the weighted score by the total cost, and you will obtain a value-per-pound index (or value-per-dollar or value-per-[insert-your-country's-currency]).
Using the example above: 500 points ÷ £5,000 = 0.10 points per £.
If another event cost £10,000 but produced a weighted score of 700, the value index would be 0.07 points per £. In this scenario, despite the higher attendance and overall score, the second event was less efficient in relative terms.
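Normalising by cost is a one-line division; here is a small sketch using the illustrative figures above:

```python
def value_index(score: float, cost: float) -> float:
    """Points of weighted engagement per unit of currency spent."""
    return score / cost

# The two events compared above
print(value_index(500, 5_000))   # 0.1 points per pound
print(value_index(700, 10_000))  # 0.07 points per pound
```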
A baseline is not something you have from the outset; it emerges once you have run several events and compared their value indices. (And this can easily be built retrospectively.) Let us look at three events side by side.
| Event | Talk Attendees (×1) | Workshop Attendees (×3) | Activations (×5) | Total Score | Event Cost | Value Index (points per £) |
|---|---|---|---|---|---|---|
| A | 200 = 200 | 50 = 150 | 30 = 150 | 500 | £5,000 | 0.10 |
| B | 150 = 150 | 80 = 240 | 60 = 300 | 690 | £6,000 | 0.115 |
| C | 300 = 300 | 20 = 60 | 10 = 50 | 410 | £7,000 | 0.058 |
With three events measured, you can begin to establish an initial benchmark:

Baseline Value Index = (0.10 + 0.115 + 0.058) ÷ 3 ≈ 0.091 points per £
This figure becomes your baseline. Events performing above it (like Event B) can be considered successful and efficient, while events below it (like Event C) may require rethinking or reallocation of resources. Over time, as more events are added, the baseline becomes stronger and more representative.
This approach enables you to say things such as: "This £5,000 event performed 25% above our baseline." That statement carries weight in discussions with leadership, even without assigning explicit monetary values to each activity.
The strength of a weighted model without monetary equivalents is threefold:

- It is transparent: anyone can see exactly how a score was produced, and the weights can be debated and refined openly.
- It is comparable: normalising by cost puts a small meetup and a flagship conference on the same footing.
- It is scalable: once the weights are agreed, the same structure extends naturally to new activities - and, later, to monetary equivalents.
In short, you do not need to begin with pounds or dollars to demonstrate the value of DevRel. A weighted model provides a robust, transparent, and scalable method of showing which events drive the most meaningful engagement, enabling you to demonstrate success with confidence while laying the groundwork for more advanced models in the future.
Before we get to how we expanded this model and how we added theoretical monetary ROI, I'd like to mention that this version of the model works best if your DevRel activities are at the Education & Enablement or Adoption cycle. I like to think of DevRel as having a funnel which goes from Awareness → Education & Enablement → Adoption → Retention → Advocacy (which maps to a developer's journey as well: first hear about → learn → build → stay engaged → champion). The reason for this is that in earlier stages - when you are making developers aware of your product and ecosystem - it may be necessary to burn through some cash without always looking at the ROI. But at later stages, when you have the developers and users, the question is rightly asked: "How does DevRel impact the business in terms of revenue?"
At Cloudinary we were in the Adoption / Engagement & Retention phase. And once we saw that the previous, simple model worked, it was time to refine it further to reflect real-world impact and introduce monetary ROI.
As we have established, at its core, the DevRel Value Model assigns weights to activities. This weighting ensures that the quality of engagement is reflected alongside the quantity.
The next step was to introduce monetary equivalents. Working closely with finance, we asked a simple but vital question: "What is a registration on our platform worth to the business?" Finance teams already calculate customer acquisition costs, pipeline values, and average lifetime value. By aligning with their expertise, we were able to agree that each registration can be valued at $10. (Note this is not the actual value we ended up with, I am using this as an example for the purpose of this article.)
But here comes the catch: at events, we do not know exactly how many developers go on to register on our platform (and eventually end up using our product). Attribution is messy: some attendees sign up weeks later, some never do, and some are already users. This is precisely why the model relies on weights as proxies for registration likelihood. Different types of engagement correlate with different levels of intent. A person who attends a talk has shown some interest, but the barrier to entry is low. Someone who joins a workshop has demonstrated a deeper commitment of time and focus, suggesting they are more likely to move forward. An activation, such as completing a demo or signing up at the booth, is an even stronger signal and the closest proxy to an actual registration. We have further extended this by adding things like brand awareness (i.e. how many people would have seen our logo at the event - this should be a conservative percentage of the total number of attendees - say 15%).
By using the aforementioned weight-based system, we can translate these different forms of engagement into registration equivalents. The idea is not to claim exact attribution but to create a reasoned proxy. For example:

- 10 talk attendees ≈ 1 registration equivalent (a factor of 0.1)
- 1 workshop attendee ≈ 0.6 registration equivalents
- 1 activation ≈ 1 registration equivalent
- 1 newsletter signup ≈ 0.5 registration equivalents
- Brand awareness reach (a conservative share of total attendees, say 15%) ≈ 0.5 registration equivalents per person reached
It should also be noted that this model can and should accommodate DevRel Qualified Leads (DRQLs) - instances where a concrete lead has been generated directly through developer relations (and passed on to sales to handle). These deserve their own high-value weighting, as they are far more closely tied to commercial outcomes than other forms of engagement.
Let us apply this approach to a hypothetical event. Suppose the results were:

- 200 talk attendees
- 50 workshop attendees
- 30 activations

Converting to registration equivalents:

- 200 talk attendees ÷ 10 = 20
- 50 workshop attendees × 0.6 = 30
- 30 activations × 1 = 30

Total = 80 registration equivalents
If each registration is agreed with finance to be worth $10, this event generated a theoretical value of $800.
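The conversion can be sketched in Python; the factors and the $10-per-registration figure below are the illustrative values from this example, not Cloudinary's real ones:

```python
# Illustrative conversion factors to registration equivalents.
REG_EQUIV = {
    "talk_attendee": 0.1,     # 10 talk attendees ~ 1 registration
    "workshop_attendee": 0.6,
    "activation": 1.0,
}
VALUE_PER_REGISTRATION = 10  # dollars, agreed with finance (example figure)

def theoretical_value(counts: dict) -> float:
    """Convert raw engagement counts into a theoretical dollar value."""
    equivalents = sum(REG_EQUIV[activity] * n for activity, n in counts.items())
    return equivalents * VALUE_PER_REGISTRATION

# The worked example: 20 + 30 + 30 = 80 equivalents, i.e. $800
event = {"talk_attendee": 200, "workshop_attendee": 50, "activation": 30}
print(theoretical_value(event))  # 800.0
```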
Here are a few more examples with additional activities added in:
| Event | Talk Attendees (÷10) | Workshop Attendees (×0.6) | Activations (×1) | Newsletter Signups (×0.5) | Brand Awareness (Reach × 0.5) | Total Registration Equivalents | Theoretical Value ($10 each) |
|---|---|---|---|---|---|---|---|
| A | 200 ÷ 10 = 20 | 50 × 0.6 = 30 | 30 × 1 = 30 | 40 × 0.5 = 20 | - | 100 | $1,000 |
| B | 150 ÷ 10 = 15 | 80 × 0.6 = 48 | 60 × 1 = 60 | - | 15% of 1,000 × 0.5 = 75 | 198 | $1,980 |
| C | 300 ÷ 10 = 30 | 20 × 0.6 = 12 | 10 × 1 = 10 | - | - | 52 | $520 |
This figure is not to be mistaken for direct revenue. Instead, it represents a directional ROI, a way of expressing impact in terms that resonate with leadership while still acknowledging the limits of attribution. The real value lies in shifting the conversation: rather than relying solely on abstract community metrics such as attendance or views, we can present numbers that fit naturally into business discussions. And once leaders see DevRel's activities in this light, they are far more likely to understand, recognise, and support the function.
Please note that the model does not take other expenses into consideration such as travel, marketing (swag), or other costs associated with DevRel activities.
What began as a way to measure event ROI soon grew broader. We applied the same principles to content creation and other activities done by DevRel: blog views, YouTube plays, and newsletter sign-ups could all be weighted, valued, and expressed in the same model. This expansion gave DevRel a holistic framework for evaluating its diverse activities.
There is no shortage of frameworks in the DevRel space. Many highlight community tiers, levels of engagement, or the health of ecosystems. These are useful lenses, but they often stop short of translating outcomes into the language that most organisations ultimately care about: finance. This is where the DevRel Value Model stands apart. It does not dismiss those qualitative insights; rather, it complements them by adding a layer of financial equivalence. In doing so, it builds a bridge between the often intangible work of community-building and the concrete expectations of company leadership.
For practitioners, this shift is immensely valuable. DevRel professionals frequently ask themselves: "Was this initiative worth the effort?" Until now, the answers were subjective or anecdotal, relying on gut feeling, applause in a room, or social media buzz. The Value Model replaces intuition with a structured method. By converting activities into weighted registration equivalents - and then mapping those equivalents against a finance-agreed value - it provides a consistent yardstick. Events, workshops, content campaigns, and online engagement can now be compared side by side. That comparison enables practitioners to make data-informed decisions about where to invest their time, energy, and limited budgets.
Please note that I am not trying to advise against using other metrics to measure success - you should still look at raw data such as the number of views on a YouTube video and, based on that, see what works and what doesn't for your organisation. However, the Value Model offers a structured approach to evaluate the impact of DevRel initiatives in terms of financial equivalence.
For leadership, the model does something even more important: it creates a single, comprehensible figure that speaks their language. Executives are accustomed to evaluating functions based on financial metrics, whether it be sales pipeline, marketing-qualified leads, or customer acquisition costs. DevRel has historically struggled to enter that conversation because its outputs do not align neatly with revenue attribution. By presenting a theoretical ROI, the Value Model gives leaders a familiar anchor point. They may not take the number as literal revenue, but they understand its intent - and that understanding reshapes DevRel from a "nice-to-have" to a strategic contributor.
It is crucial to underline that this model does not pretend to offer exact revenue attribution. It is theoretical by design. Its role is not to compete with the precision of a sales funnel, but to offer directional visibility where none previously existed. That distinction is powerful: leadership does not expect DevRel to deliver the same metrics as sales, but they do expect clarity. A theoretical ROI, even with its imperfections, reframes the conversation from one of ambiguity to one of credibility.
With tightening budgets and AI reshaping the industry, every function is asked to justify its existence. DevRel cannot depend on community goodwill alone; it must demonstrate value in clear, structured terms. By making the invisible visible, the DevRel Value Model helps secure DevRel's seat at the strategic table.
In practice, this means that DevRel teams can finally sit at the same table as their counterparts in sales, marketing, and product. When everyone is speaking the same financial language - even if DevRel's figures are directional rather than definitive - conversations change. Budgets are defended more effectively, priorities are debated on equal footing, and DevRel's contributions are no longer invisible. That is what makes this model different: it takes what has always been difficult to measure and expresses it in a form that both practitioners and executives can act upon.