[Pangram]: fix redundant double iteration in dig deeper solution #3948
Conversation
Issue: the current implementation creates an intermediate list from a set, which is redundant and inefficient.

**Before:**

```python
return len([ltr for ltr in set(sentence.lower()) if ltr.isalpha()]) == 26
```

**After:**

```python
return len(set(ltr for ltr in sentence.lower() if ltr.isalpha())) == 26
```

Why this is better:

- Eliminates double iteration: the old version iterates once for `set()` and again for the list comprehension
- Removes unnecessary list creation: there is no need to convert set → list just to count
- Better memory usage: the generator expression feeds directly into the set constructor
- Same time complexity, but better constant factors

The old approach first deduplicates all the characters, then filters the alphabetic ones. The new approach filters while building the set, which is the natural order of operations for this problem.
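As a quick sanity check, here is a minimal sketch of the revised one-liner wrapped in a function. The function name `is_pangram` and its signature are assumptions for illustration; the exact wrapper in the exercise's solution file may differ.

```python
def is_pangram(sentence: str) -> bool:
    """Return True if the sentence uses every letter a-z at least once."""
    # Filter while building the set: only alphabetic characters are added,
    # so the set never holds more than 26 entries.
    return len(set(ltr for ltr in sentence.lower() if ltr.isalpha())) == 26


print(is_pangram("The quick brown fox jumps over the lazy dog."))  # True
print(is_pangram("Hello, world!"))  # False
```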
The approach's content, *Set with len*, will also have to be updated.
Hi @princemuel, Thanks for catching this and for submitting a PR. 😄 The code looks good, but I agree that the text for this approach needs updating as well. In particular:
- The `snippet.txt` for this particular approach will need updating.
- The approaches `introduction.md` for this exercise will need updating, for both the code and any related text.
- There is also updating work to do on the performance article and benchmarking code, since the timings will have to be re-run now that the code for one of the approaches has changed.

Let me know once those changes have been made, and I can run everything through the CI and review it. 🎉
Hi @BethanyG, I just saw this. I'm swamped with work at the moment, but I'd like to work on it over the weekend.
Hiya 👋🏼 So I've updated the relevant docs, code and links 🤖 Please review and let me know if there are any changes to be made. |
BethanyG
left a comment
@princemuel Heya!
Apologies for the delay in review. This looks good, and I have approved it.
Thanks so much for doing this. 😄 I've added you in as a contributor for both the approach and the benchmarking. That should up your contribution rep. Please let me know if you see any issues or have any questions.
I will merge to the website shortly.
@BethanyG Hiya 👋🏼 Thanks for making me responsible for this PR. I hadn't planned to implement it myself, but taking ownership turned out to be a great learning experience. Really appreciate the trust and the chance to contribute more deeply. Looking forward to helping out more where I can.