BSides SLC 2025 Closing Ceremony
This video is a closing ceremony for the BSides SLC 2025 conference. It includes a summary of the event's focus points, sponsor acknowledgments, and the announcement of winners for the Capture The Flag (CTF) and social media contests. The speaker also briefly demonstrates a Python script used for random winner selection.
Randomness in Production: Why Your Winner Selection Script is Probably Broken
TLDR: A simple Python script used for raffle winner selection at a conference highlights a common, dangerous pitfall in software development: using non-cryptographic random number generators for security-sensitive operations. While the script was intended for a low-stakes event, the underlying logic flaw is identical to those found in password reset tokens, session IDs, and API keys. Developers must stop relying on standard pseudo-random number generators when the outcome needs to be unpredictable.
Security researchers often spend their time hunting for complex memory corruption bugs or intricate logic flaws in authentication flows. Yet, some of the most persistent vulnerabilities in production environments stem from a fundamental misunderstanding of how computers generate randomness. During the closing ceremony of BSides SLC 2025, a brief demonstration of a Python script used for raffle winner selection provided a perfect, real-world example of why developers should never use standard libraries for tasks requiring genuine unpredictability.
The Illusion of Randomness
The script in question used Python's standard random module to select winners from a list of attendees. On the surface, this seems harmless: for a conference raffle, the stakes are low. However, the code revealed a classic mistake: calling random.choice() or similar module-level functions without considering the underlying entropy source or the predictability of the seed. (Note that random.SystemRandom, despite living in the same module, does draw from the operating system's entropy pool; the convenience functions at module level do not.)
In Python, the random module is explicitly documented as unsuitable for security purposes: the official documentation warns that its generators "should not be used for security purposes." When a developer calls random.random() or random.choice(), they are drawing from the Mersenne Twister, a pseudo-random number generator (PRNG) that is entirely deterministic. If an attacker can observe 624 consecutive 32-bit outputs, they can reconstruct the generator's internal state and predict every future output with 100% accuracy.
Anatomy of the Flaw
Consider the following snippet, which mirrors the logic often found in poorly implemented "random" token generators:
import random

def generate_token():
    # This is NOT secure
    return random.getrandbits(32)

print(generate_token())
If this were used to generate a password reset token, an attacker who collected enough consecutive outputs (624 for full state recovery) could determine the internal state of the PRNG. Once the state is known, the attacker can generate the next token in the sequence before the legitimate user even receives their email. In the context of the conference raffle, if the script were exposed through an API or a public-facing interface, a participant could theoretically predict the "random" selection by analyzing previous winners.
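The determinism is easy to demonstrate. The sketch below (the seed value is illustrative) creates two independent Mersenne Twister instances with the same seed and shows that they emit identical "token" sequences, which is exactly why knowing the state means knowing every future token:

```python
import random

# Two Mersenne Twister generators seeded identically produce
# identical "random" sequences -- the output is fully deterministic.
a = random.Random(1337)
b = random.Random(1337)

tokens_a = [a.getrandbits(32) for _ in range(5)]
tokens_b = [b.getrandbits(32) for _ in range(5)]

print(tokens_a == tokens_b)  # True
```

In a real attack the adversary does not choose the seed; instead, tools reconstruct the 624-word internal state from observed outputs and then run the generator forward.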
The OWASP Cryptographic Failures category covers exactly this type of issue. When developers treat PRNGs as cryptographically secure, they open the door to session hijacking, predictable resource identifiers, and bypassed access controls. The fix is trivial but frequently ignored: use the secrets module.
Moving to Cryptographically Secure Randomness
Python introduced the secrets module specifically to address this class of vulnerability. Unlike the random module, secrets uses the most secure source of randomness the operating system provides, such as /dev/urandom on Linux or BCryptGenRandom on Windows.
For any application where the output must be unpredictable—whether it is a raffle winner, a session cookie, or a temporary file name—the implementation should look like this:
import secrets

def generate_secure_token():
    # This is cryptographically secure
    return secrets.token_hex(16)

print(generate_secure_token())
This simple change shifts the burden of entropy from a predictable algorithm to the operating system's kernel, which collects noise from hardware interrupts and other unpredictable sources.
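The same module also handles the raffle use case directly: secrets.choice draws a single element using the OS CSPRNG, so the selection cannot be predicted from earlier draws. A minimal sketch, with a hypothetical attendee list:

```python
import secrets

# Hypothetical attendee list -- the names are illustrative only.
attendees = ["alice", "bob", "carol", "dave"]

# secrets.choice uses the operating system's CSPRNG under the hood,
# so each draw is independent and unpredictable.
winner = secrets.choice(attendees)
print(winner)
```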
Real-World Impact for Pentesters
During a penetration test, identifying the use of weak PRNGs is a high-value finding. Look for patterns in generated tokens. If you see tokens that appear to be hex-encoded but have a limited character set or follow a predictable pattern over time, you are likely looking at a weak PRNG implementation.
In bug bounty programs, this often manifests in "insecure direct object reference" (IDOR) scenarios where the object identifier is a predictable token. If you can predict the next token, you can access data belonging to other users. Always check the source code if it is available, or perform statistical analysis on a large sample of generated tokens to check for bias or patterns.
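One crude way to run that statistical analysis is a character-frequency count over a token sample. The sketch below generates its own stand-in sample with the secrets module; in practice the tokens would be ones collected from the target, and large deviations from a uniform distribution would hint at a weak or biased generator:

```python
import secrets
from collections import Counter

# Stand-in sample; replace with tokens harvested from the target.
tokens = [secrets.token_hex(16) for _ in range(1000)]

counts = Counter(ch for t in tokens for ch in t)
total = sum(counts.values())

# For unbiased hex output, each of the 16 characters should appear
# roughly 1/16 (6.25%) of the time.
for ch, n in sorted(counts.items()):
    print(f"{ch}: {n / total:.3%}")
```

This only catches gross bias; subtle PRNG weaknesses require reconstructing generator state or running a proper statistical test suite.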
Defenders should treat randomness as a critical security requirement. If your application generates any form of secret, token, or identifier, audit your codebase for the random module. Replace every instance with the secrets module or an equivalent library that interfaces with the system's CSPRNG (Cryptographically Secure Pseudo-Random Number Generator).
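A starting point for such an audit is a simple AST scan for imports of the random module. The snippet below parses an inline source string for illustration; a real audit would walk your repository's files and feed each one through the same check:

```python
import ast

# Inline stand-in for a file read from the repository under audit.
source = "import random\n\ntoken = random.getrandbits(32)\n"

tree = ast.parse(source)
flagged = [
    node.lineno
    for node in ast.walk(tree)
    # Catch both "import random" and "from random import ..."
    if (isinstance(node, ast.Import) and any(a.name == "random" for a in node.names))
    or (isinstance(node, ast.ImportFrom) and node.module == "random")
]
print(flagged)  # [1]
```

Flagged lines still need manual review, since random is fine for simulations and shuffling test data; the goal is to confirm no security-sensitive path depends on it.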
Security is rarely about the complexity of the attack; it is about the simplicity of the oversight. The next time you see a "random" feature in an application, don't assume it's secure. Treat it as a potential entry point and verify the entropy source. You might be surprised at how often developers choose convenience over the fundamental principles of cryptography.