compiler: Enhance IR to support more advanced parlang (CUDA/HIP/SYCL) features #2717
Conversation
Force-pushed 901aca9 to 7ca18ee
Force-pushed 7ca18ee to af2339d
Codecov Report
@@ Coverage Diff @@
## main #2717 +/- ##
==========================================
- Coverage 92.10% 92.08% -0.03%
==========================================
Files 248 248
Lines 49654 49739 +85
Branches 4368 4373 +5
==========================================
+ Hits 45734 45801 +67
- Misses 3213 3228 +15
- Partials 707 710 +3
Minor comments, but looks good.
return super().__mul__(other)

class Terminal:
Is that really needed?
self.tensor = tensor

def _hashable_content(self):
    return super()._hashable_content() + (self.tensor,)
self.tensor._hashable_content() might be more efficient.
But couldn't that potentially cause key clashes if you somehow had both a FunctionMap and its tensor in the same mapper/set? I would say the current one is safer, imo.
Yes, this is the way to go; otherwise it might hash exactly the same as a pure tensor.
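For illustration, a minimal sketch of the clash being discussed (the Tensor stand-in and its attributes are hypothetical; only FunctionMap._hashable_content follows the diff):

class Tensor:
    """Hypothetical stand-in for the wrapped tensor."""

    def __init__(self, name):
        self.name = name

    def _hashable_content(self):
        return (self.name,)

    def __hash__(self):
        return hash(self._hashable_content())

    def __eq__(self, other):
        return (type(other) is type(self) and
                self._hashable_content() == other._hashable_content())


class FunctionMap(Tensor):
    def __init__(self, tensor):
        super().__init__(tensor.name)
        self.tensor = tensor

    def _hashable_content(self):
        # The PR's version: appending the tensor itself keeps the content
        # tuple structurally distinct from the tensor's own (name,) tuple.
        # Returning self.tensor._hashable_content() instead would make a
        # FunctionMap and its bare tensor hash (and compare) identically.
        return super()._hashable_content() + (self.tensor,)


t = Tensor('u')
fm = FunctionMap(t)
assert len({t, fm}) == 2  # a mapper/set can hold both without clashing

With the spliced version, both objects would produce the same content tuple and could collapse to a single key.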
is_Array = True

_symbol_prefix = 'a'
Nice
self._directions = frozendict(directions)
directions = directions or {}
directions = {d: v for d, v in directions.items() if d in self.intervals}
directions.update({i.dim: Any for i in self.intervals
Would it be worth renaming the direction Any to avoid potential squatting on typing.Any?
We should rather ask the Python developers to revisit their type-hinting craziness 😂
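For context, a sketch of the usual aliasing workaround when both names are needed in one module (the devito import path and the unconstrained() helper are assumptions, not part of the diff):

from typing import Any as TypingAny   # alias typing.Any out of the way

from devito.ir.support import Any     # assumed path to the IR direction

def unconstrained(space) -> TypingAny:
    # Hypothetical helper: dimensions whose direction is the IR Any,
    # i.e. free to be traversed either way
    return {d for d, v in space.directions.items() if v is Any}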
self._directions = frozendict(directions)

def __repr__(self):
    ret = ', '.join(["%s%s" % (repr(i), repr(self.directions[i.dim]))
Leftover non-f-string.
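For reference, a self-contained sketch of the equivalent f-string form (the Interval stand-in and sample data are hypothetical; only the joined expression follows the diff):

from collections import namedtuple

Interval = namedtuple('Interval', 'dim')   # hypothetical stand-in
intervals = [Interval('x'), Interval('y')]
directions = {'x': '<Forward>', 'y': '<Any>'}

# %-formatting as in the diff:
old = ', '.join("%s%s" % (repr(i), repr(directions[i.dim])) for i in intervals)

# f-string form being requested (!r applies repr):
new = ', '.join(f"{i!r}{directions[i.dim]!r}" for i in intervals)

assert old == new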
frees = obj._C_free

if obj.free_symbols - {obj}:
kwargs = {'objs' if obj.free_symbols - {obj} else 'standalones': definition,
          'efuncs': efuncs, 'frees': frees}
storage.update(obj, site, **kwargs)
perhaps?
Closing in favor of #2748 since I've just finished a massive rebase.
Extended version of #2708