Software Testing Basics


  • View profile for John K.

    Staff Software Engineer at Meta

    7,809 followers

    As a mobile engineer, I try to break everything I build. Let me explain.

    One of the most common things I see from junior engineers is that they only test the "happy path" (the perfect, ideal user flow). But guess what: no user will ever use our apps the way we think they will.

    There are also real-world environmental differences that affect your app:
    📡 Network conditions – slow connections, sudden dropouts
    🔒 Permission settings – missing access to camera, location, notifications
    📱 Device limitations – low-end hardware, limited memory, battery saver mode
    🌍 Localization factors – RTL settings, different fonts, accessibility tools

    Of course, we can't manually QA every one of those situations without some automation. But at least try to break your app. 👊

    Rapid-fire testing tactics:
    ✅ Swipe through flows quickly
    ✅ Tap the same target multiple times (do you need a debouncer? See the sketch below.)
    ✅ Background and foreground your app rapidly
    ✅ Rotate your phone at key moments
    ✅ Test network interruptions

    In 5 minutes you can run through this list for every PR. You may think, "In reality this barely happens." Well, when billions of users are using your app, even 0.01% of them is more people than many apps see in a day.

    Remember: if you don't break it, your users will.

    #softwaredevelopment #engineering #bestpractices #productdevelopment
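    The post asks whether you need a debouncer when the same target gets tapped twice. Here is a minimal sketch of that idea in TypeScript (my own illustration, not from the post; the 300 ms window is an assumption to tune per interaction):

```typescript
// Leading-edge debounce: accept the first call, ignore rapid repeats
// until `windowMs` has elapsed since the last accepted call.
function debounceLeading<A extends unknown[]>(
  action: (...args: A) => void,
  windowMs = 300, // assumed window; tune per interaction
): (...args: A) => void {
  let lastAccepted = 0;
  return (...args: A) => {
    const now = Date.now();
    if (now - lastAccepted < windowMs) return; // swallow the double-tap
    lastAccepted = now;
    action(...args);
  };
}

// Usage: a submit handler that cannot fire twice from a double-tap.
const submitPayment = debounceLeading(() => console.log("submitted once"));
submitPayment();
submitPayment(); // ignored: arrives inside the 300 ms window
```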

  • View profile for Ben F.

    Join us for a workshop on AI in QA! LINKS IN ABOUT

    13,443 followers

    One of the most impactful changes I've seen in quality happens when you implement one specific process: a 30-minute QA-Dev sync meeting for each feature, before coding begins, to discuss the implementation and testing strategy.

    When I first bring this up with a client, I get predictable objections:
    Developers don't want to "waste" their time.
    Leadership doesn't want to "lose" development time.
    Testing is necessary anyway, so why discuss it?
    Our QA couldn't possibly understand code.

    The reality is that the impact of effective testing can be remarkably hard for an organization to see. When it goes smoothly, nothing happens: no fires to put out, no production issues. As a result, meetings like this can be difficult for leadership to measure or justify with a clear metric.

    What confuses me personally is why most engineering leaders say they understand the testing pyramid, yet they often break it in two, essentially creating two separate pyramids: one owned by Dev, one by QA. Instead, you should have a collaborative session where QA and Dev discuss the entire testing pyramid, from unit tests to integration and end-to-end tests, to ensure comprehensive and efficient coverage. Talking through what constitutes effective unit and integration tests dramatically affects how much manual and end-to-end testing you need.

    Additionally, I'm continually impressed by how a QA who doesn't "understand" full-stack development can still call out issues like missing validations, test cases, and edge cases in a method (see the sketch below). QA and Devs should also evaluate whether any refactoring is needed, identify potential impacts on existing functionality, and clarify ambiguous requirements early.

    The outcome is a clear test plan, agreement on automated and manual checks, and a shared understanding that reduces late-stage bugs and improves overall product quality.

    #quality #testing #software
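    As one illustration of the kind of gap such a sync surfaces: a QA analyst asking "what about an empty email?" becomes a unit test before coding starts. A minimal sketch (the validator and its cases are hypothetical, not from the post):

```typescript
// Hypothetical validator discussed in a QA-Dev sync. The cases below
// are the sort QA calls out: empty input, whitespace, missing domain.
function isValidEmail(input: string): boolean {
  const trimmed = input.trim();
  if (trimmed.length === 0 || trimmed.length > 254) return false;
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(trimmed);
}

// Edge cases agreed on in the sync, written as plain assertions.
const cases: Array<[string, boolean]> = [
  ["user@example.com", true],
  ["", false],           // empty input
  ["   ", false],        // whitespace only
  ["no-at-sign", false], // structurally invalid
  ["a@b", false],        // missing top-level domain
];
for (const [input, expected] of cases) {
  console.assert(isValidEmail(input) === expected, `failed: "${input}"`);
}
```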

  • A test report has four parts:

    1. Describe the testing: what it covers, what it does not cover, and why this testing is happening.
    2. Provide evidence of that testing: we need this for analysis and understanding.
    3. Describe the problems, issues, and risks discovered: include, as best you can, evidence for how to reproduce them, and as much data as necessary to understand each problem. The relationship of this information to business and customer needs is very important.
    4. Describe the status of the testing: how is it going, what direction should it take, is anything blocking it, and what do you plan next?

    The above report is the whole point of testing. We work to produce this information for whomever needs it to make decisions and take action. If this report is not delivered, or if it is missing any of the pieces above, it is incomplete. Maybe not an entire failure, but incomplete regardless.

    The size and detail level of the report vary with the situation and audience. Even a set of unit tests a developer writes for themselves carries something of each of these points, even the "why": unit test naming conventions are usually about encapsulating what and why together in a fast, short way.

    I spent an hour or two recently evaluating a website testing report delivered by a high-profile AI testing tool. The report failed on every single one of the points above. Were the tool a human, I would have fired them and gone back to the talent pool to find a different tester.

    When I think about the gaps I have seen between other disciplines and testers (the cases where the other people say, "We don't know what the testers are doing"), in every case I have seen, both sides of that relationship do not realize they need the four things listed above. Deliver those four things, and it will make sense. Leave them out, and you are operating blindly.

    #softwaretesting #softwaredevelopment
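    One way to make those four parts concrete is to treat the report as a structure with required fields. A sketch (field names are mine, not the author's, and not a standard schema):

```typescript
// The four-part test report as a data structure; illustrative only.
interface TestReport {
  // 1. What the testing covers, what it does not, and why it happened.
  description: { covers: string[]; doesNotCover: string[]; rationale: string };
  // 2. Evidence of the testing, for analysis and understanding.
  evidence: string[]; // e.g. logs, screenshots, session notes
  // 3. Problems, issues, and risks, with reproduction and business impact.
  findings: Array<{ summary: string; reproduction: string[]; businessImpact: string }>;
  // 4. Status: how it is going, direction, blockers, and next plans.
  status: { progress: string; blockers: string[]; nextSteps: string[] };
}
```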

  • View profile for Artem Golubev

    Co-Founder and CEO of testRigor, the #1 Generative AI-based Test Automation Tool

    34,689 followers

    𝐐𝐀 𝐋𝐞𝐚𝐝𝐞𝐫𝐬: is your exploratory testing catching every flaw? Your current method could be letting critical issues slip by…

    Here's the reality: without a clear plan, exploratory testing can quickly become a chaotic search where key areas go unexamined. When there's no defined objective, scope, or time limit, important issues may remain hidden, all while you're under pressure to deliver fast.

    A simple testing guide, otherwise known as a test charter, can change that. A test charter is essentially a brief plan for a testing session. It lays out what you need to explore, pinpoints which features demand your focus, and sets a realistic timeframe for your efforts. This guide is especially useful in exploratory testing, where the absence of strict instructions might otherwise lead you astray.

    By defining a clear objective, you know exactly what you're aiming to test. Establishing a precise scope ensures that you concentrate on the parts of your application that matter most, rather than drifting into less relevant areas. A set time limit keeps your session efficient and prevents the process from becoming an endless search.

    Moreover, a test charter outlines your testing approach. Whether you're examining usability or hunting for unexpected errors, having this plan creates a balance between creative exploration and the structure necessary to uncover hidden flaws. It prevents the common pitfall of missing vital issues simply because the testing session was too unfocused.

    If your testing sessions feel scattered and you worry that something vital is being overlooked, it's time to rethink your approach. Integrating a test charter into your process can bring the order you need while still allowing the flexibility to explore and discover.

    #TestCharter #ExploratoryTesting #QualityAssurance
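    Here is what such a charter might look like in practice, mirroring the objective, scope, time limit, and approach described above (the concrete values are invented for illustration):

```typescript
// Illustrative test charter for one exploratory session.
// Shape and values are examples, not a formal standard.
const charter = {
  objective: "Explore checkout error handling for payment declines",
  scope: ["cart -> payment -> confirmation", "retry after a decline"],
  outOfScope: ["saved-card management", "gift cards"],
  timeboxMinutes: 60,
  approach: "Vary decline reasons and network timing; watch for stuck states",
  sessionNotes: [] as string[], // filled in as the session runs
};
console.log(`Charter: ${charter.objective} (${charter.timeboxMinutes} min)`);
```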

  • View profile for Lamhot Siagian

    SDET Expert • PhD Student • Data Science & AI/ML • Founder, Software Test Architect • iOS & Android Mobile & Device Testing • Playwright | Cypress | Appium • CI/CD (AWS & GCP) • Open to Work. Green Card

    22,909 followers

    WWDC 2025: What AI Testers Need to Know

    Apple's WWDC 2025 is a turning point for software testing, especially for those of us working with AI, mobile, and automation. From on-device large language models to AI-generated code and a reimagined UI, everything about testing just got more complex (and more exciting).

    Here's why this matters for us as AI / Mobile Test Engineers:

    ✅ On-Device Foundation Models
    Apple now allows developers to integrate large language models directly into their apps, no server required. This brings new testing challenges:
    • Offline performance
    • Multilingual output validation
    • Privacy and hallucination safeguards

    ✅ AI-Assisted Development in Xcode 26
    Xcode can now generate unit tests and suggest code using embedded AI. That means testers must:
    • Review AI-generated logic
    • Identify missed edge cases
    • Validate test reliability over time

    ✅ Liquid Glass UI Design
    The new fluid interface system introduces dynamic layouts that require:
    • Rigorous regression testing
    • Accessibility audits
    • Smooth rendering checks across devices

    ✅ AI Agents in System Apps
    Apple Intelligence is now integrated into apps like Messages, Phone, and Notes. Testers must ensure:
    • Real-time translation accuracy
    • Smart call screening behavior
    • Personalized coaching that doesn't leak data or fail unpredictably

    ✅ AI + Automation in Shortcuts
    Workflows powered by AI bring powerful automations, but with them come new risks:
    • Context-sensitive failures
    • Broken triggers
    • State inconsistency under various user conditions

    🧠 What This Means for Testers
    ✅ Validate local model safety, hallucination handling, and fallback behavior (see the sketch below)
    ✅ Ensure no data leakage, even during AI or Siri integration
    ✅ Regression-test the dynamic Liquid Glass UI across platforms
    ✅ Build automation that adapts to AI-influenced outputs
    ✅ Shift your mindset: AI isn't just part of the app, it's now the app logic

    🎥 6 Must-Watch WWDC 2025 Videos & Resources
    1. Explore prompt design & safety for on-device foundation models (Apple Developer) https://lnkd.in/gvARsQMp
    2. Deep dive into the Foundation Models framework (Apple Developer) https://lnkd.in/g9XksYMN
    3. Discover machine learning & AI frameworks on Apple platforms (Apple Developer) https://lnkd.in/gf3_CgfS
    4. WWDC 2025 – iOS 26, New UI, Apple Intelligence + More (YouTube) https://lnkd.in/gz8kYEmJ
    5. WWDC 2025: Everything Revealed in 9 Minutes https://lnkd.in/gUfu6Ute
    6. Top 10 Biggest iOS 26 Features From WWDC 2025 https://lnkd.in/gcgvzdXq

    How are you adapting your test strategy in response to this wave of AI innovation?

    #WWDC2025 #AITesting #TestAutomation #OnDeviceAI #MobileTesting #SDET #Xcode26 #FoundationModels #AppleIntelligence #PrivacyByDesign #LiquidGlass #AIValidation #iOS26
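    As a rough sketch of the "fallback behavior" item above: the shape of a test that checks an app degrades gracefully when its on-device model is unavailable. This deliberately avoids Apple's actual APIs; `OnDeviceModel` and `generateReply` are invented stand-ins, not the Foundation Models framework:

```typescript
// Invented stand-ins: a client that prefers an on-device model and
// falls back to a canned response when the model is unavailable.
interface OnDeviceModel {
  available(): boolean;
  complete(prompt: string): Promise<string>;
}

async function generateReply(model: OnDeviceModel, prompt: string): Promise<string> {
  if (!model.available()) {
    return "Smart replies are unavailable right now."; // fallback path
  }
  return model.complete(prompt);
}

// Test: the fallback must engage instead of throwing when the model is off.
const offlineModel: OnDeviceModel = {
  available: () => false,
  complete: async () => { throw new Error("model not loaded"); },
};
generateReply(offlineModel, "Summarize my notes").then((reply) => {
  console.assert(reply.includes("unavailable"), "expected the fallback reply");
});
```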

  • View profile for Bharti Garg

    Co-Founder & Chief Product Officer @FrugalTesting | Radiating the best QA methodologies across the globe 🌏 | 2 decades of diverse experience | Helped 150+ clients across verticals

    27,675 followers

    I’ve always felt that the best QA doesn’t chase bugs, because it removes friction at the root.

    And two areas where friction shows up the most?
    📍 Location-based flows
    📲 OTP verification

    They seem simple. But when you're testing them across devices, SIM cards, networks, and GPS signals, it quickly becomes chaotic.

    That's why, over time, my team and I built a simple principle into our QA workflow:

    Test core flows - without relying on real devices.

    I still remember when validating an OTP meant waiting endlessly for a single SMS, or walking around, hoping the GPS would finally lock in. Eventually, we asked: "How can we make this easier, not just for users, but for our own teams?"

    That shift changed more than our tools. It shaped how we test. We built environments that could simulate the real world, so our teams could keep building smooth experiences without being blocked by SIM cards or satellite signals.

    Because when your app says "OTP failed" or "location not found," the user doesn't care about the backend. They just want it to work. Smoothly. The first time.

    💬 How are you testing geo + auth flows without real devices? Would love to hear your approach.

    #QA #TestAutomation #MobileTesting #FrugalTesting #bhartigarg #OTPTesting #SimulatedTesting
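    The post doesn't name its tooling, but for web flows Playwright can simulate both signals. A minimal sketch (the URL, selectors, and OTP endpoint are invented for illustration):

```typescript
import { test, expect } from "@playwright/test";

// Simulated GPS: no walking around waiting for a satellite lock.
test.use({
  geolocation: { latitude: 17.385, longitude: 78.4867 },
  permissions: ["geolocation"],
});

test("geo + OTP flow without a real device", async ({ page }) => {
  // Simulated OTP: stub the (hypothetical) verification endpoint
  // instead of waiting for a real SMS.
  await page.route("**/api/otp/verify", (route) =>
    route.fulfill({ status: 200, body: JSON.stringify({ verified: true }) }),
  );

  await page.goto("https://example.com/login"); // placeholder URL
  await page.getByLabel("One-time password").fill("123456");
  await page.getByRole("button", { name: "Verify" }).click();

  await expect(page.getByText("location not found")).toHaveCount(0);
  await expect(page.getByText("Welcome")).toBeVisible();
});
```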

  • Ever feel like you're speaking a different language than your colleagues? That's how our QA team felt when I worked at Blizzard.

    Company policy: keep QA separate from dev teams.
    Reality: constant miscommunication and frustration.

    Riot saw an opportunity. Why not embed QA directly with developers?

    → Started small: invited key QA members to our daily standups
    → Result: instant improvement in bug reporting and fixes

    But it wasn't smooth sailing:
    → Pushback from management
    → Developers skeptical of "outsiders"
    → QA unsure of their new role

    Persistence paid off. After many months:
    → Faster bug fixes
    → Higher-quality releases
    → Happier teams all around

    This experiment became the foundation for a new company-wide policy. Now, embedded QA is standard practice not just at Riot, but across the industry.

    Lesson learned: sometimes the best solutions come from breaking the rules (respectfully).

    2 questions I find useful to ask (regularly):
    1) What unspoken policy is holding our team back?
    2) How can we bridge gaps between departments?

    QA are the unsung heroes of game development.

    P.S. Tell the QA in your life you appreciate them today.

  • View profile for Benjamin Carcich

    Helping Producers in Games Build Better Games. Host and Publisher of the Building Better Games Podcast and Newsletter. Follow me for posts on leadership in game development. God bless!

    11,219 followers

    New Game Production Q&A today; here's a question I didn't get to answer from the last one.

    From Julian: "What strategies have you found for bringing QA into the active development cycle, when it's traditionally been decoupled and viewed as an 'end of development' function?"

    You know, the biggest thing I see keeping QA as the 'last in the pipeline' discipline is that we often silo them away and treat them as if that 'end of development' function is all they can do. It's a low-efficacy view of QA, plus a studio design that, even in the org chart, keeps them apart from everyone else.

    The first thing I'd want to do is start breaking QA analysts out of their silo and embedding them on teams. I'm not kidding: even if you just do that, if you've hired QA who can take initiative and care about the quality of the player experience, they will go into teams and meetings and find ways to add value.

    Next, you want to rethink the nature of QA. If you view them as 'people that find and report bugs', the view that their role is all bunched up at the end when the game is 'done' can almost sort of be rationalized, badly. But if you view them as individuals on teams who are responsible for making sure low-quality experiences (and I mean the engaged experience of the player, not graphical fidelity or refined UI!) don't get to players, and as people who maintain a deep connection to the quality of the products they are helping deliver, you can start understanding just how valuable the QA function can be.

    Even at a 'dealing with bugs' level, a truth discovered in engineering (and also true in game dev) is that the time it takes to resolve a bug is directly related to how long it goes undiscovered. If a QA analyst on a team finds a bug within six hours, working alongside a designer and engineer on a new bit of gameplay or a feature, odds are that bug will be resolved in a fraction of the time it would have taken someone to puzzle their way through the code and Lua six months from now, while we're all in a panic trying to ship.

    Having QA embedded on teams, creating test plans, working to make sure everyone is following your definition of done, advocating for stable builds and regular playtests, and ultimately pushing for whatever is produced to actually land with your audience is an end-to-end function.

    It's OK to put a lot of responsibility on your QA. They've worked in a world where all they do is submit reports. They want to be involved. Seriously, in my years of game dev I've rarely seen a discipline rise to the challenges put before them more than QA. Massive respect. They can be a much bigger asset to studios than they typically are. Stop wasting the energy and awareness they bring!

    #gameproduction #gamedevelopment #gameindustry #qaanalyst

  • View profile for Aston Cook

    Senior QA Automation Engineer | Playwright, Cypress & Selenium | API & E2E Testing | CI/CD & Scalable Frameworks

    5,800 followers

    The secret to finding more bugs that no one talks about:

    Most testers rely on test cases to find bugs. But here's the problem: test cases only find expected issues.

    The real trick? Think like a user, not a tester. Here's how:

    Break the expected flow – Users don't always follow the "happy path." Try entering invalid data, refreshing at the wrong time, or switching devices mid-action.

    Test beyond the UI – Bugs hide in APIs, databases, and logs. A UI might look fine while the backend is failing (see the sketch below).

    Observe, don't just execute – Instead of rushing through test steps, watch for UI glitches, slow load times, and unexpected behavior.

    Use exploratory testing techniques – Take time to think beyond the requirements. Ask "What happens if I do this?" instead of just following a script.

    The best testers don't just execute tests. They explore, observe, and question.
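    One way to act on "test beyond the UI": cross-check what the page claims against what the backend actually stored. A minimal Playwright sketch (the URL, endpoint, and selectors are invented for illustration):

```typescript
import { test, expect } from "@playwright/test";

test("UI success message matches backend state", async ({ page, request }) => {
  await page.goto("https://example.com/orders/new"); // placeholder URL
  await page.getByRole("button", { name: "Place order" }).click();

  // The UI might happily say "Order placed"...
  await expect(page.getByText("Order placed")).toBeVisible();

  // ...so verify the backend agrees (hypothetical endpoint).
  const res = await request.get("https://example.com/api/orders/latest");
  expect(res.ok()).toBeTruthy();
  const order = await res.json();
  expect(order.status).toBe("confirmed"); // catches a UI that lies
});
```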

  • View profile for Caleb Crandall

    Context-driven software engineer in test | Scrum master

    2,486 followers

    What do testers do when development is not complete?

    While you _could_ spend a bunch of time writing "test cases" and a "test plan", I think that's often a wasteful use of time. Instead:

    * You could participate in requirement and design discussions and reviews.
    * You could do some lightweight test design using tools like mind maps, test charters, etc., jotting down high-level test ideas that will guide the actual testing without becoming as prescriptive and time-consuming as highly scripted "test cases".
    * You could even start testing by working with the developer to get access to the partial implementation, keeping in mind that some things may not be "done" and delivering feedback more informally than you might for a bug in finished code. Depending on how the team works, this could be code on a developer's personal branch, or turning on a particular "feature flag", etc. (see the sketch below).
    * You might continue doing some deeper testing on other recently implemented work, or adjacent areas.

    Testing doesn't have to happen only "at the end" of a user story, feature, release, or project. I prefer moving away from viewing the process as "development implements a thing, then hands it off to a 'QA' team when they're done" and instead embedding testers and developers in the same team, where they're constantly interacting and collaborating.

    There are still "handoffs", but they're within the same team (hint: they also happen even if you have a team of just developers), and they're smaller and more frequent, so everyone tends to retain more context. You also avoid having people wait days or weeks for a "finished" thing to be handed to them, unable to get eyes on it or provide feedback in the interim.

    #softwaretesting #criticalthinking #agiletesting
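    A sketch of the feature-flag idea from the third bullet: turn on an in-progress feature for the test session and keep the assertions loose, since the work isn't "done". The flag name, URL, and cookie mechanism are invented for illustration:

```typescript
import { test, expect } from "@playwright/test";

test("early look at an in-progress checkout redesign", async ({ page, context }) => {
  // Hypothetical flag mechanism: teams often gate unfinished work
  // behind a cookie, query param, or config service.
  await context.addCookies([
    { name: "ff_checkout_redesign", value: "on", domain: "example.com", path: "/" },
  ]);

  await page.goto("https://example.com/checkout"); // placeholder URL

  // Loose assertion: the feature is partial, so just confirm the new
  // flow renders; anything odd goes back to the developer informally.
  await expect(page.getByRole("heading", { name: "Checkout" })).toBeVisible();
});
```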
