Tag: Software

  • Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers – Velocity

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    Velocity is a go-to team performance metric in the world of software development. In its basic sense, velocity is the number of story points a team completes in a sprint. As a planning tool it can be quite useful, but relying on velocity as a performance metric for individual engineers or teams introduces several pitfalls that can do more harm than good.
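    Used as intended, the arithmetic behind velocity is a simple planning aid. Here is a minimal Python sketch of that use (the sprint history is hypothetical):

    ```python
    # Velocity as a planning signal, not a performance score.
    def average_velocity(completed_points, window=3):
        """Rolling average of story points completed over the last `window` sprints."""
        recent = completed_points[-window:]
        return sum(recent) / len(recent)

    sprint_history = [21, 34, 25, 30]  # story points completed per sprint (hypothetical)
    print(f"Plan the next sprint around ~{average_velocity(sprint_history):.0f} points")
    ```

    The number forecasts team capacity for the next sprint; it says nothing about any individual's performance, which is exactly where the pitfalls below begin.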

    The Origins of Velocity

    Velocity was initially intended as a team-level metric within Agile frameworks like Scrum. It gives teams insight into how much work they can complete in a sprint so they can calibrate their workload accordingly. However, the simplicity of the metric tempts organizations to misuse it as a measure of productivity.

    Pitfall 1: Encouraging Quantity over Quality

    When velocity becomes a performance metric, engineers may focus more on finishing story points than on doing quality work. This leads to technical debt and a general deterioration of code quality. Engineers may also stop taking on complicated but important work that doesn’t yield many story points.

    Pitfall 2: Misaligned Incentives

    The definition of a story point can vary greatly among teams in terms of complexity, effort, or risk. Comparing velocity across teams or individuals therefore produces unfair comparisons and misaligned incentives: one team may inflate story points to create a “fake” velocity, while another may deflate them to appear consistent.

    Pitfall 3: Ignoring Contextual Factors

    Velocity does not factor in external elements like:

    • Ad-hoc bugs or incidents that can’t wait.
    • Collaboration around non-ticketed work, such as mentoring, working on a whiteboard, or writing documentation.
    • Variability in tasks, where a single, highly impactful feature might take more time but doesn’t align with the velocity model.

    These contextual nuances are crucial to understanding an engineer’s true contribution but are invisible to velocity metrics.

    Pitfall 4: Fostering a Culture of Competition

    When velocity is tied to individual performance evaluations, it can foster unhealthy competition. Engineers may prioritize individual tasks over collaboration, undermining team cohesion. This siloed mindset can lead to inefficiencies and a lack of shared ownership.

    Conclusion

    Velocity was never intended to measure individual performance, yet that is how it is most often used. Misusing it encourages bad habits, degrades code quality, and ultimately shifts focus away from customer value. By rethinking metrics and prioritizing a holistic view of contributions, organizations can foster a healthier, more productive software engineering culture. Let’s move beyond velocity and measure what truly matters.

  • Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers – Code Commits and Code Reviews

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    In the world of software development, the performance of engineers often becomes a topic of discussion. Managers and organizations seek tangible ways to evaluate contributions, and metrics like code commits and code reviews are frequently considered. However, relying on these metrics to assess software engineers’ performance is not only reductive but also harmful to team dynamics and long-term project success.

    Why Code Commits and Code Reviews are Bad Metrics

    1. Code Quantity Does Not Equate to Code Quality

    Using the number of commits as a measure of productivity incentivizes quantity over quality. Engineers may feel pressured to create many small commits or inflate their contribution with unnecessary changes to appear productive. This can lead to bloated codebases, technical debt, and reduced focus on writing clean, maintainable code.

    Moreover, not all engineering tasks require the same amount of code. Refactoring, debugging, and optimizing existing code often result in fewer commits but are critical for the health and scalability of a project.

    2. Code Reviews Reflect Collaboration, Not Individual Skill

    While code reviews are essential for maintaining code quality and knowledge sharing, the number of reviews performed or comments made does not directly correlate with an engineer’s skill. Some engineers may excel at reviewing complex algorithms, while others may focus on broader architectural decisions or mentorship through reviews.

    Incentivizing reviews as a metric can lead to superficial or excessive feedback that adds little value. Effective code reviews prioritize quality insights, not the volume of comments.

    3. Context Matters—And Metrics Lack It

    Metrics like commits and reviews fail to capture the context of an engineer’s contributions. For example:

    • Role Diversity: Senior engineers often spend more time mentoring, designing architectures, or aligning with stakeholders, leading to fewer commits.
    • Project Complexity: Some projects require deep problem-solving and innovation, which may not result in immediate code output.
    • Cross-Functional Contributions: Engineers may contribute through documentation, automation, or process improvements—activities that are invaluable but not directly tied to code metrics.

    4. Encourages Unhealthy Competition

    Metrics-based evaluation fosters competition rather than collaboration. Engineers may prioritize their individual output over helping teammates or addressing broader team goals. This undermines the collaborative spirit essential for agile and DevOps practices.

    5. Ignores User Impact

    The ultimate goal of software engineering is to deliver value to users. Metrics like commits and reviews focus on process rather than outcomes. An engineer who designs a feature that significantly improves user satisfaction or revenue may have fewer commits than someone resolving routine bugs, yet a far greater impact.

    A Better Approach to Evaluation

    Instead of relying on shallow metrics, organizations should consider holistic and qualitative methods to evaluate software engineers, such as:

    • Peer and Manager Feedback: Collect feedback on collaboration, problem-solving, and leadership qualities.
    • Impact Measurement: Focus on the outcomes of their work, such as system reliability, user satisfaction, or business growth.
    • Skill Development: Assess how engineers are improving their technical and interpersonal skills over time.
    • Team Contributions: Evaluate their role in fostering a healthy, productive, and innovative team culture.

    Conclusion

    Code commits and code reviews are easy metrics to track, but they’re also misleading. Evaluating software engineers based on these numbers alone risks promoting harmful behaviors, overlooking meaningful contributions, and devaluing the qualities that drive long-term success. Organizations must adopt more thoughtful and comprehensive approaches to recognize the true value engineers bring to their teams and projects.

    Join the Conversation

    What are your thoughts on metrics for evaluating software engineers? Have you encountered challenges with metrics like code commits and reviews, or have you implemented better alternatives? Share your experiences and insights in the comments below—let’s rethink how we measure success in software engineering together!

  • Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers – Lines of Code

    This blog post is part of the series – Rethinking Metrics: Avoid These Common Pitfalls in Measuring Software Engineers.

    Why Lines of Code (LOC) is a Misleading Metric for Software Engineers

    When measuring the productivity of software engineers, one metric often comes up: Lines of Code (LOC). At first glance, LOC seems like a straightforward and logical measure—after all, more code must mean more work done, right? However, relying on LOC as a performance metric can lead to misleading conclusions and counterproductive behaviors that ultimately harm both engineers and organizations.

    Drawing from my experience in software development and leadership, I’ve seen firsthand how this metric can misrepresent an engineer’s true contributions. Let’s dive into why LOC is a flawed measure and explore better alternatives.


    The Allure of Lines of Code

    LOC’s popularity stems from its simplicity. It provides:

    • Quantifiable Output: LOC offers a clear, numerical value that’s easy to track and report.
    • Perceived Productivity: More lines written may appear to indicate greater effort or progress.
    • Historical Context: For decades, LOC has been used to estimate project size, effort, and complexity.

    While these qualities make LOC appealing, they don’t capture the full picture of an engineer’s productivity or value.


    Why Lines of Code is a Bad Metric

    1. Quantity Over Quality

    Measuring productivity by LOC can incentivize engineers to write more code than necessary. Instead of crafting elegant, efficient solutions, they might:

    • Write verbose or redundant code to inflate the LOC count.
    • Avoid simplifying or refactoring code, even when it’s beneficial for the project.

    Ultimately, prioritizing quantity over quality leads to bloated codebases that are harder to maintain and scale.

    2. Ignores Problem Complexity

    Not all lines of code are created equal. Some problems require more thought and fewer lines to solve effectively. For example:

    • A single, well-designed line of code can accomplish the same task as 20 poorly structured lines.
    • Engineers working on complex algorithms or debugging might produce fewer lines but contribute far more value.

    LOC fails to account for the effort and skill involved in creating maintainable, high-quality solutions.
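    To make the point concrete, here is a small illustrative Python sketch: two functions with identical behavior, where the shorter one is the better engineering even though it “scores” far fewer lines:

    ```python
    # Verbose version: many lines, same behavior.
    def total_even_verbose(numbers):
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total = total + n
        return total

    # Concise version: one expression, easier to read and maintain.
    def total_even(numbers):
        return sum(n for n in numbers if n % 2 == 0)

    assert total_even_verbose([1, 2, 3, 4]) == total_even([1, 2, 3, 4]) == 6
    ```

    By LOC, the verbose author looks several times more productive; by any measure of quality, the concise author did the better job.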

    3. Not Aligned with Value

    More code doesn’t necessarily mean more value for the business or end users. In fact, a large codebase can introduce:

    • Higher maintenance costs: More code means more potential for bugs and longer onboarding times for new engineers.
    • Reduced performance: Overly complex systems can slow down development and deployment.
    • Wasted effort: If the added lines don’t align with user needs or business goals, they don’t deliver real value.

    A Better Approach: Value-Based Metrics

    To truly measure an engineer’s contribution, focus on metrics that emphasize value and outcomes rather than raw output. Here are some alternatives:

    1. Completed Features

    Track the number and quality of features delivered. This measures tangible contributions that align with business goals and user needs.

    2. Defect Density

    Monitor the number of bugs introduced relative to the amount of code written. High-quality code should minimize defects and improve system reliability.
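    Defect density is straightforward to compute; a minimal Python sketch (the figures are illustrative):

    ```python
    def defect_density(defects, lines_of_code):
        """Defects per thousand lines of code (KLOC)."""
        return defects / (lines_of_code / 1000)

    print(defect_density(6, 12_000))  # 0.5 defects per KLOC
    ```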

    3. Cycle Time

    Measure how quickly a task moves from development to deployment. Shorter cycle times indicate streamlined processes and efficient workflows.
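    Cycle time is just the elapsed time between two events; a minimal Python sketch (the dates are made up):

    ```python
    from datetime import datetime

    def cycle_time_days(started, deployed):
        """Days between when work started and when it reached production."""
        fmt = "%Y-%m-%d"
        return (datetime.strptime(deployed, fmt) - datetime.strptime(started, fmt)).days

    print(cycle_time_days("2024-03-01", "2024-03-05"))  # 4
    ```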

    4. Customer Satisfaction

    Gauge how well the delivered features meet user expectations. Metrics like Net Promoter Score (NPS) or direct user feedback can provide valuable insights.
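    NPS itself is a simple calculation over 0–10 survey ratings; a small Python sketch (the ratings are hypothetical):

    ```python
    def net_promoter_score(ratings):
        """NPS = % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / len(ratings)

    print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 0.0
    ```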


    How Leaders Can Shift the Focus

    As leaders, we have the responsibility to move beyond superficial metrics like LOC and create an environment that values meaningful contributions. Here’s how:

    • Educate Stakeholders: Help managers and executives understand why LOC is misleading and advocate for better metrics.
    • Prioritize Code Quality: Encourage practices like code reviews, refactoring, and automated testing to ensure maintainable solutions.
    • Reward Impact: Recognize engineers for solving problems, improving performance, and delivering value—not just for writing more code.

    Conclusion

    Lines of Code may seem like an easy way to measure productivity, but it’s a flawed metric that doesn’t reflect the true value of an engineer’s work. By focusing on metrics that prioritize outcomes, quality, and user satisfaction, we can foster a culture that rewards meaningful contributions and drives long-term success.

    Let’s rethink how we measure success in software engineering. What metrics have you found to be effective in your teams? Share your thoughts in the comments below!

  • Welcome to Leadership Loop: A Journey through Leadership, Innovation, and Technology


    Hello, and welcome to the Leadership Loop blog! My name is Siva Jagadeesan, and I’m thrilled to launch this blog where I’ll share insights, experiences, and actionable advice on topics close to my heart: software development, management, leadership, team dynamics, and navigating workplace challenges.

    A Little About Me

    With over 25 years of experience in the tech industry, I’ve had the privilege of working in various roles that span the spectrum of technology and leadership. From founding startups to driving large-scale innovations at companies like Amazon, my journey has been defined by a passion for building not just great products but also strong, empowered teams.

    Here’s a quick overview of my professional background:

    • Leadership Expertise: I’ve led diverse teams of engineers, product managers, and data scientists, helping them thrive and achieve their full potential.
    • Entrepreneurial Experience: As a founding member of multiple ventures, I’ve worked on everything from product strategy to scaling operations.
    • Continuous Improvement Advocate: I’m committed to refining workflows and finding innovative solutions to ensure teams deliver value faster and more effectively.
    • Core Beliefs: I believe that with the right people, technology, and culture, organizations can achieve extraordinary results.

    Why Leadership Loop?

    After an incredible chapter at Amazon, where I contributed to building and scaling Amazon Publisher Services (APS) into the #1 SSP, I find myself at a pivotal moment of reflection and renewal. My time at Amazon was marked by extraordinary growth, both professionally and personally. Collaborating with talented colleagues on innovative projects has equipped me with invaluable lessons about leadership, scaling teams, and navigating the complexities of technology-driven organizations.

    As I close this chapter, I am inspired to share my journey and insights with others. This blog represents a new beginning—an opportunity to distill years of experience into actionable advice and meaningful conversations.

    This is my way of:

    • Giving Back: Helping others navigate their leadership journeys by sharing lessons I’ve learned along the way.
    • Sharing Knowledge: Offering practical tips and strategies for leadership and team management.
    • Fostering Conversations: Encouraging open dialogue about the triumphs and struggles of managing people and technology.

    What to Expect from Leadership Loop

    Here are some of the topics you can look forward to:

    • Software Management: Techniques for scaling teams, improving workflows, and ensuring high-quality software delivery.
    • Leadership and People Management: Insights into building trust, inspiring teams, and addressing challenges like conflict and low morale.
    • Navigating Workplace Dynamics: Practical advice for managing office dynamics with integrity and fostering collaboration in competitive environments.
    • Personal Growth as a Leader: Reflections on overcoming challenges, embracing failures, and continuously evolving.

    Each post will combine real-world examples, actionable takeaways, and honest reflections from my career. Whether you’re a first-time manager, an experienced leader, or simply curious about the intersection of technology and leadership, Leadership Loop has something for you.

    Join Me on This Journey

    The beauty of leadership lies in its complexity, and I believe the best learning happens when we share our experiences. I encourage you to engage with the content—leave comments, share your thoughts, or ask questions. This is not just my blog; it’s a platform for a community of like-minded individuals passionate about leadership and growth.

    Let’s explore the endless loop of learning, leading, and evolving—together.

    Warm regards,
    Siva Jagadeesan