Author: Siva Jagadeesan

  • Rethinking Metrics : Avoid These Common Pitfalls in Measuring Software Engineers – Velocity

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    Velocity is a go-to metric in the world of software development, often treated as a measure of team performance. In its basic sense, velocity is the number of story points a team completes in a sprint. As a planning tool, it can be quite useful. But relying on velocity as a performance metric for individual engineers or teams introduces several pitfalls, and these pitfalls can do more harm than good.

    The Origins of Velocity

    Velocity was initially intended to be a team-level planning metric, used within Agile frameworks like Scrum. It gives teams insight into how much work they can complete in a sprint so they can calibrate their workload accordingly. However, the simplicity of the metric tempts organizations into misusing it as a measure of productivity.
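
    Used purely for planning, the arithmetic behind velocity is simple: sum the story points completed in each sprint, then forecast the next sprint's capacity from a rolling average. A minimal sketch in Python, with invented sprint data:

```python
# Rolling-average velocity as a planning aid (not a performance score).
# The sprint history below is invented for illustration.
completed_points = [21, 18, 24, 19, 23]  # points finished in the last 5 sprints

def forecast_capacity(history, window=3):
    """Average of the most recent `window` sprints, used to plan the next one."""
    recent = history[-window:]
    return sum(recent) / len(recent)

print(forecast_capacity(completed_points))  # planning estimate: 22.0
```

    Note that nothing in this calculation says anything about individuals; it only helps the team guess how much to pull into the next sprint.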

    Pitfall 1: Encouraging Quantity over Quality

    When velocity is treated as a performance metric, engineers tend to focus more on finishing story points than on doing quality work. This leads to technical debt and a general deterioration of code quality. Engineers may also stop taking on complicated but important work that doesn't produce many story points.

    Pitfall 2: Misaligned Incentives

    The definition of a story point varies greatly among teams in terms of complexity, effort, or risk. Comparing velocity across teams or individuals therefore produces unfair comparisons and misaligned incentives. One team may inflate story points to manufacture a higher velocity, while another may estimate conservatively to stay consistent.

    Pitfall 3: Ignoring Contextual Factors

    Velocity does not factor in external elements like:

    • Ad-hoc bugs or incidents that can’t wait.
    • Collaboration around non-ticketed work, such as mentoring, working on a whiteboard, or writing documentation.
    • Variability in tasks, where a single, highly impactful feature might take more time but doesn’t align with the velocity model.

    These contextual nuances are crucial to understanding an engineer's true contribution but are invisible to a velocity metric.

    Pitfall 4: Fostering a Culture of Competition

    When velocity is tied to individual performance evaluations, it can foster unhealthy competition. Engineers may prioritize individual tasks over collaboration, undermining team cohesion. This siloed mindset can lead to inefficiencies and a lack of shared ownership.

    Conclusion

    Velocity was never intended to measure individual performance, yet this is how it is most often used. Misused this way, it encourages bad habits, degrades code quality, and ultimately shifts focus away from customer value. By rethinking metrics and prioritizing a holistic view of contributions, organizations can foster a healthier, more productive software engineering culture. Let's move beyond velocity and measure what truly matters.

  • Rethinking Metrics : Avoid These Common Pitfalls in Measuring Software Engineers – Utilization Rate

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    In software development, there’s always a push to measure and improve performance. Managers often look for clear indicators of productivity, and metrics like utilization rates seem like an obvious choice. But is keeping engineers fully occupied really the best way to gauge success? The short answer is no. In fact, it can harm team dynamics, creativity, and long-term outcomes.

    Drawing from my experience in software development and leadership, I’ve seen firsthand how this metric can misrepresent engineers’ true contributions. Let’s dive into why Utilization Rate is a flawed measure and explore better alternatives.

    Why Utilization Rate is a bad metric

    • Being Busy doesn’t mean Being Effective: When full utilization is the goal, busyness is rewarded rather than value creation. Engineers may spend time on low-priority items or inflate their workload to appear productive, and the relentless focus on output drives burnout and a decline in quality. The most critical contributions — designing better systems, mentoring teammates, or debugging complex issues — often don’t produce immediate, visible results, so they are the first to be devalued in environments where everything revolves around utilization.
    • Flexibility goes out the Window: A team running at full capacity has no room to maneuver. What do you do when an urgent bug needs fixing, or a last-minute request pops up? Without slack, everything comes to a grinding halt and priorities start to clash. A little breathing room can make all the difference when dealing with the unexpected.
    • Creativity gets Stifled: Innovation is not forged in a pressure cooker. Engineers need time to experiment, reflect, and refine their ideas. A team pushed to maximum capacity won’t have the mental bandwidth to innovate or propose disruptive solutions; instead, it will take the easiest route that checks the boxes and meets the deadlines.
    • Burnout becomes Inevitable: When engineers are always at capacity, they eventually run out of steam. Burnout renders them unable to perform to the best of their ability, and it impacts the overall harmony and efficiency of a team. The hidden costs to the organization are high turnover and decreased motivation.
    • It’s about Outcomes, Not Hours: Utilization metrics are about activity, not impact. For instance, an engineer who dedicates only 50% of their time to building a feature that delights users creates more value than someone operating at 100% on tasks with little impact. It’s not the amount you’re doing — it’s the results you’re getting.
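
    The flexibility problem above has a well-known analogue in queueing theory: in the simple M/M/1 model, average waiting time grows in proportion to rho / (1 - rho), where rho is utilization. A team is not literally a single-server queue, so treat this only as an illustration of the curve's shape, but it shows why a fully utilized team grinds to a halt when the unexpected arrives:

```python
# Relative queueing delay vs. utilization in an M/M/1 queue: wait ~ rho / (1 - rho).
# A team is not an M/M/1 server; the point is the shape of the curve:
# going from 50% to 95% busy multiplies queueing delay ~19x, not ~2x.

def relative_wait(rho):
    """Queueing delay factor at utilization rho (0 <= rho < 1)."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return rho / (1 - rho)

for rho in (0.5, 0.8, 0.9, 0.95):
    print(f"{rho:.0%} busy -> wait factor {relative_wait(rho):.1f}")
```

    At 100% utilization the denominator hits zero: any new arrival waits, in theory, forever. That is what "no slack" means in practice.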

    A Better Way to Measure Productivity

    Instead of chasing full utilization, organizations should adopt strategies that measure meaningful contributions. Here’s how:

    • Focus on Results: Don’t worry about how busy a person is; measure results instead. Outcome-based metrics — customer satisfaction, system reliability, and time-to-market for features — paint a clearer picture of success.
    • Make Room for Slack: Allow time for thinking, learning, and exploring. Slack time isn’t unproductive time—it’s what allows teams to be resilient and innovative.
    • Quality over Quantity: Focus on clean, maintainable code and well-thought-out solutions. This promotes practices that minimize technical debt and build toward the long-term vision. Productivity is not about producing as many outputs as you can; it’s about producing the right outputs at a high level.
    • Establish Sustainable Work Practices: Promote a culture where work-life balance matters. Encourage engineers to take breaks, set boundaries, and avoid crunch time. Healthy teams are more productive in the long run.
    • Long-Term Focus: Make sure every task supports bigger organizational objectives. Engineers perform better when they can see that their work makes a difference. That alignment maintains motivation and directs effort where it best serves the organization.

    Conclusion

    While 100% utilization sounds great on paper, it’s a flawed approach in practice. Great teams balance load with flexibility and focus on meaningful outcomes, not busywork. Reimagining metrics and aligning them with the true drivers of success lays the foundation for engineers to thrive and achieve remarkable performance.

  • Rethinking Metrics : Avoid These Common Pitfalls in Measuring Software Engineers – Code Commits and Code Reviews

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    In the world of software development, the performance of engineers often becomes a topic of discussion. Managers and organizations seek tangible ways to evaluate contributions, and metrics like code commits and code reviews are frequently considered. However, relying on these metrics to assess software engineers’ performance is not only reductive but also harmful to team dynamics and long-term project success.

    Why Code Commits and Code Reviews are Bad Metrics

    1. Code Quantity Does Not Equate to Code Quality

    Using the number of commits as a measure of productivity incentivizes quantity over quality. Engineers may feel pressured to create many small commits or inflate their contribution with unnecessary changes to appear productive. This can lead to bloated codebases, technical debt, and reduced focus on writing clean, maintainable code.

    Moreover, not all engineering tasks require the same amount of code. Refactoring, debugging, and optimizing existing code often result in fewer commits but are critical for the health and scalability of a project.

    2. Code Reviews Reflect Collaboration, Not Individual Skill

    While code reviews are essential for maintaining code quality and knowledge sharing, the number of reviews performed or comments made does not directly correlate with an engineer’s skill. Some engineers may excel at reviewing complex algorithms, while others may focus on broader architectural decisions or mentorship through reviews.

    Incentivizing reviews as a metric can lead to superficial or excessive feedback that adds little value. Effective code reviews prioritize quality insights, not the volume of comments.

    3. Context Matters—And Metrics Lack It

    Metrics like commits and reviews fail to capture the context of an engineer’s contributions. For example:

    • Role Diversity: Senior engineers often spend more time mentoring, designing architectures, or aligning with stakeholders, leading to fewer commits.
    • Project Complexity: Some projects require deep problem-solving and innovation, which may not result in immediate code output.
    • Cross-Functional Contributions: Engineers may contribute through documentation, automation, or process improvements—activities that are invaluable but not directly tied to code metrics.

    4. Encourages Unhealthy Competition

    Metrics-based evaluation fosters competition rather than collaboration. Engineers may prioritize their individual output over helping teammates or addressing broader team goals. This undermines the collaborative spirit essential for agile and DevOps practices.

    5. Ignores User Impact

    The ultimate goal of software engineering is to deliver value to users. Metrics like commits and reviews focus on process rather than outcomes. An engineer who designs a feature that significantly improves user satisfaction or revenue may have fewer commits than someone resolving routine bugs, yet makes a far greater impact.

    A Better Approach to Evaluation

    Instead of relying on shallow metrics, organizations should consider holistic and qualitative methods to evaluate software engineers, such as:

    • Peer and Manager Feedback: Collect feedback on collaboration, problem-solving, and leadership qualities.
    • Impact Measurement: Focus on the outcomes of their work, such as system reliability, user satisfaction, or business growth.
    • Skill Development: Assess how engineers are improving their technical and interpersonal skills over time.
    • Team Contributions: Evaluate their role in fostering a healthy, productive, and innovative team culture.

    Conclusion

    Code commits and code reviews are easy metrics to track, but they’re also misleading. Evaluating software engineers based on these numbers alone risks promoting harmful behaviors, overlooking meaningful contributions, and devaluing the qualities that drive long-term success. Organizations must adopt more thoughtful and comprehensive approaches to recognize the true value engineers bring to their teams and projects.

    Join the Conversation

    What are your thoughts on metrics for evaluating software engineers? Have you encountered challenges with metrics like code commits and reviews, or have you implemented better alternatives? Share your experiences and insights in the comments below—let’s rethink how we measure success in software engineering together!

  • Rethinking Metrics : Avoid These Common Pitfalls in Measuring Software Engineers – Lines of Code

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    Why Lines of Code (LOC) is a Misleading Metric for Software Engineers

    When measuring the productivity of software engineers, one metric often comes up: Lines of Code (LOC). At first glance, LOC seems like a straightforward and logical measure—after all, more code must mean more work done, right? However, relying on LOC as a performance metric can lead to misleading conclusions and counterproductive behaviors that ultimately harm both engineers and organizations.

    Drawing from my experience in software development and leadership, I’ve seen firsthand how this metric can misrepresent an engineer’s true contributions. Let’s dive into why LOC is a flawed measure and explore better alternatives.


    The Allure of Lines of Code

    LOC’s popularity stems from its simplicity. It provides:

    • Quantifiable Output: LOC offers a clear, numerical value that’s easy to track and report.
    • Perceived Productivity: More lines written may appear to indicate greater effort or progress.
    • Historical Context: For decades, LOC has been used to estimate project size, effort, and complexity.

    While these qualities make LOC appealing, they don’t capture the full picture of an engineer’s productivity or value.


    Why Lines of Code is a Bad Metric

    1. Quantity Over Quality

    Measuring productivity by LOC can incentivize engineers to write more code than necessary. Instead of crafting elegant, efficient solutions, they might:

    • Write verbose or redundant code to inflate the LOC count.
    • Avoid simplifying or refactoring code, even when it’s beneficial for the project.

    Ultimately, prioritizing quantity over quality leads to bloated codebases that are harder to maintain and scale.

    2. Ignores Problem Complexity

    Not all lines of code are created equal. Some problems require more thought and fewer lines to solve effectively. For example:

    • A single, well-designed line of code can accomplish the same task as 20 poorly structured lines.
    • Engineers working on complex algorithms or debugging might produce fewer lines but contribute far more value.

    LOC fails to account for the effort and skill involved in creating maintainable, high-quality solutions.
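
    To make the point concrete, the two functions below are equivalent (both sum the even numbers in a list), yet the verbose one scores several times higher on an LOC count. Both are contrived for illustration, not taken from any real codebase:

```python
# Two equivalent functions: the verbose one "wins" on a lines-of-code metric.

def sum_of_evens_verbose(numbers):
    total = 0
    for number in numbers:
        remainder = number % 2
        if remainder == 0:
            even_number = number
            total = total + even_number
        else:
            pass
    return total

def sum_of_evens(numbers):
    return sum(n for n in numbers if n % 2 == 0)

# Same behavior, very different LOC "productivity".
assert sum_of_evens_verbose([1, 2, 3, 4]) == sum_of_evens([1, 2, 3, 4]) == 6
```

    An engineer who refactors the first version into the second would show up as negative productivity under an LOC metric, despite improving the codebase.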

    3. Not Aligned with Value

    More code doesn’t necessarily mean more value for the business or end users. In fact, a large codebase can introduce:

    • Higher maintenance costs: More code means more potential for bugs and longer onboarding times for new engineers.
    • Reduced performance: Overly complex systems can slow down development and deployment.
    • Wasted effort: If the added lines don’t align with user needs or business goals, they don’t deliver real value.

    A Better Approach: Value-Based Metrics

    To truly measure an engineer’s contribution, focus on metrics that emphasize value and outcomes rather than raw output. Here are some alternatives:

    1. Completed Features

    Track the number and quality of features delivered. This measures tangible contributions that align with business goals and user needs.

    2. Defect Density

    Monitor the number of bugs introduced relative to the amount of code written. High-quality code should minimize defects and improve system reliability.
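
    Defect density is typically normalized per thousand lines of code (KLOC). A minimal sketch, using invented figures:

```python
# Defect density: bugs attributed to a change set, per thousand lines of code.
# The figures below are invented for illustration.

def defect_density(bug_count, lines_of_code):
    """Bugs per KLOC; lower is better."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return bug_count / (lines_of_code / 1000)

print(defect_density(6, 12_000))  # 0.5 bugs per KLOC
```

    Unlike raw LOC, this ratio rewards writing less code when less code means fewer defects.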

    3. Cycle Time

    Measure how quickly a task moves from development to deployment. Shorter cycle times indicate streamlined processes and efficient workflows.
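
    One simple way to compute cycle time is the elapsed time between when work on a task starts and when it reaches production. A sketch with invented timestamps:

```python
# Cycle time: elapsed time from "work started" to "deployed".
# The timestamps below are invented for illustration.
from datetime import datetime

def cycle_time_days(started, deployed):
    """Elapsed days between start of work and deployment."""
    return (deployed - started).total_seconds() / 86_400

started = datetime(2024, 3, 1, 9, 0)
deployed = datetime(2024, 3, 4, 9, 0)
print(cycle_time_days(started, deployed))  # 3.0
```

    Tracking the distribution of cycle times over many tasks, rather than any single task, is what reveals whether the workflow is actually streamlined.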

    4. Customer Satisfaction

    Gauge how well the delivered features meet user expectations. Metrics like Net Promoter Score (NPS) or direct user feedback can provide valuable insights.
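
    NPS itself is simple arithmetic: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6), with 7-8 counted as passives. A sketch with invented survey responses:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The survey scores below are invented for illustration.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3, 9, 10, 2, 8]))  # 10.0
```

    The resulting score ranges from -100 (all detractors) to +100 (all promoters).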


    How Leaders Can Shift the Focus

    As leaders, we have the responsibility to move beyond superficial metrics like LOC and create an environment that values meaningful contributions. Here’s how:

    • Educate Stakeholders: Help managers and executives understand why LOC is misleading and advocate for better metrics.
    • Prioritize Code Quality: Encourage practices like code reviews, refactoring, and automated testing to ensure maintainable solutions.
    • Reward Impact: Recognize engineers for solving problems, improving performance, and delivering value—not just for writing more code.

    Conclusion

    Lines of Code may seem like an easy way to measure productivity, but it’s a flawed metric that doesn’t reflect the true value of an engineer’s work. By focusing on metrics that prioritize outcomes, quality, and user satisfaction, we can foster a culture that rewards meaningful contributions and drives long-term success.

    Let’s rethink how we measure success in software engineering. What metrics have you found to be effective in your teams? Share your thoughts in the comments below!

  • Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers – Intro

    In the fast-paced world of software development, metrics are indispensable. They help us gauge progress, evaluate performance, and make informed decisions. But not all metrics are created equal. Some metrics, while widely used, can do more harm than good when misunderstood or misapplied.

    This blog series aims to shine a light on the key metrics that are commonly used to measure software engineers but often fall short of capturing true performance and impact. These metrics, while popular, can lead to misleading conclusions, encourage counterproductive behaviors, and ultimately hinder the growth and success of both individuals and teams.


    Why Talk About Bad Metrics?

    Metrics are powerful tools—but only when used wisely. A poorly chosen metric can:

    • Incentivize the wrong behaviors, such as prioritizing quantity over quality.
    • Create unnecessary pressure on developers, leading to burnout.
    • Obscure the real value that software engineers bring to their teams and organizations.

    By understanding the limitations of these metrics, we can move toward a more meaningful and effective way of measuring success in software engineering.


    What to Expect

    In this series, we’ll take a closer look at some of the most commonly misused metrics, including:

    • Lines of Code (LOC): Why more code doesn’t always mean more progress.
    • Velocity: The dangers of equating story points with performance.
    • Bug Count: How focusing on the number of bugs can penalize transparency.
    • Utilization Rate: Why 100% utilization isn’t synonymous with productivity.
    • Commit Frequency: The pitfalls of equating activity with meaningful work.
    • Code Coverage: When high percentages can give a false sense of security.
    • Sprint Burndown Charts: How over-reliance on these charts can create unnecessary stress.

    Each post will explore:

    1. Why the metric became popular.
    2. The potential pitfalls and unintended consequences of using it.
    3. Better alternatives for evaluating software engineering performance and team success.

    Let’s Rethink How We Measure Success

    The ultimate goal of this series is to inspire a shift in how we think about performance metrics. Instead of relying on superficial indicators, we’ll explore ways to focus on what truly matters: delivering value, fostering innovation, and empowering teams to succeed.

    Join us as we challenge the status quo, debunk misconceptions, and pave the way for smarter, more meaningful metrics in software engineering.

    Series:

    1. Lines of Code (LOC)
    2. Code Commits and Code Reviews
    3. Utilization Rate
    4. Velocity

  • Welcome to Leadership Loop: A Journey through Leadership, Innovation, and Technology

    Welcome to Leadership Loop: A Journey through Leadership, Innovation, and Technology

    Hello, and welcome to Leadership Loop blog! My name is Siva Jagadeesan, and I’m thrilled to launch this blog where I’ll share insights, experiences, and actionable advice on topics close to my heart: software development, management, leadership, team dynamics, and navigating workplace challenges.

    A Little About Me

    With over 25 years of experience in the tech industry, I’ve had the privilege of working in various roles that span the spectrum of technology and leadership. From founding startups to driving large-scale innovations at companies like Amazon, my journey has been defined by a passion for building not just great products but also strong, empowered teams.

    Here’s a quick overview of my professional background:

    • Leadership Expertise: I’ve led diverse teams of engineers, product managers, and data scientists, helping them thrive and achieve their full potential.
    • Entrepreneurial Experience: As a founding member in multiple ventures, I’ve worked on everything from product strategy to scaling operations.
    • Continuous Improvement Advocate: I’m committed to refining workflows and finding innovative solutions to ensure teams deliver value faster and more effectively.
    • Core Beliefs: I believe that with the right people, technology, and culture, organizations can achieve extraordinary results.

    Why Leadership Loop?

    After an incredible chapter at Amazon, where I contributed to building and scaling Amazon Publisher Services (APS) into the #1 SSP, I find myself at a pivotal moment of reflection and renewal. My time at Amazon was marked by extraordinary growth, both professionally and personally. Collaborating with talented colleagues on innovative projects has equipped me with invaluable lessons about leadership, scaling teams, and navigating the complexities of technology-driven organizations.

    As I close this chapter, I am inspired to share my journey and insights with others. This blog represents a new beginning—an opportunity to distill years of experience into actionable advice and meaningful conversations.

    This is my way of:

    Giving Back: Helping others navigate their leadership journeys by sharing lessons I’ve learned along the way.

    Sharing Knowledge: Offering practical tips and strategies for leadership and team management.

    Fostering Conversations: Encouraging open dialogue about the triumphs and struggles of managing people and technology.

    What to Expect from Leadership Loop

    Here are some of the topics you can look forward to:

    • Software Management: Techniques for scaling teams, improving workflows, and ensuring high-quality software delivery.
    • Leadership and People Management: Insights into building trust, inspiring teams, and addressing challenges like conflict and low morale.
    • Navigating Workplace Dynamics: Practical advice for managing office dynamics with integrity and fostering collaboration in competitive environments.
    • Personal Growth as a Leader: Reflections on overcoming challenges, embracing failures, and continuously evolving.

    Each post will combine real-world examples, actionable takeaways, and honest reflections from my career. Whether you’re a first-time manager, an experienced leader, or simply curious about the intersection of technology and leadership, Leadership Loop has something for you.

    Join Me on This Journey

    The beauty of leadership lies in its complexity, and I believe the best learning happens when we share our experiences. I encourage you to engage with the content—leave comments, share your thoughts, or ask questions. This is not just my blog; it’s a platform for a community of like-minded individuals passionate about leadership and growth.

    Let’s explore the endless loop of learning, leading, and evolving—together.

    Warm regards,
    Siva Jagadeesan