
CFP for High-Scale IPcache feature #7

Merged
joestringer merged 2 commits into main from pr/pchaigno/high-scale-ipcache on Aug 9, 2024

Conversation

@pchaigno
Member

No description provided.

@pchaigno force-pushed the pr/pchaigno/high-scale-ipcache branch from 0ca2aa2 to 47be1e0 on April 28, 2023 10:28
Member

@joestringer left a comment


Makes sense overall and the high level goals are very useful to understand the design changes necessary to implement that approach 👍

I mainly have two aspects for consideration in the threads below, along with misc feedback:

  1. I don't understand why the tunnel source must be the Pod IP range. This seems to impose a Pod Prefix routability requirement on the network, which means that even if Cilium doesn't need to know about every destination Pod in the cluster(mesh), the network at least needs to know about the routability information of every Pod in the cluster(mesh).
  2. On one hand, this CFP specifies egress policy as a non-goal, but at the same time it includes quite a few details about making egress policy work (particularly around DNS handling and ToFQDNs policy). Do you intend to expand the scope of this CFP or move those aspects into a subsequent follow-up?

@pchaigno force-pushed the pr/pchaigno/high-scale-ipcache branch 4 times, most recently from 9fff40e to be84a53 on May 1, 2023 22:38
@pchaigno requested a review from joestringer on May 1, 2023 22:39
@linsun

linsun commented Jan 12, 2024

Hi there! I wanted to mention Rob proposed xDS adapter for Cilium which may be relevant here, PTAL: https://docs.google.com/document/d/1U4pO_dTaHERKOtrneNA8njW19HSVbq3sBM3x8an4878/edit#heading=h.y3v1ksm0ev6r

@pchaigno force-pushed the pr/pchaigno/high-scale-ipcache branch from 26b63c5 to 99d52ea on August 9, 2024 11:48
pchaigno and others added 2 commits August 9, 2024 13:48
Co-authored-by: Joe Stringer <[email protected]>
Signed-off-by: Paul Chaignon <[email protected]>
@pchaigno force-pushed the pr/pchaigno/high-scale-ipcache branch from 99d52ea to 66785a1 on August 9, 2024 11:49
Member

@joestringer left a comment


From what I recall, some parts of this were implemented, but not all. Additionally, some other features like CiliumCIDRGroups may overlap with this, which could be worth considering if it is picked up again.

At a high level, I scanned through and my feeling at this point is that the core idea of reducing pressure on the ipcache bpf map by changing the model of expected data in there makes sense as a specific feature. I think there was another set of changes in this CFP that were particularly opinionated about exactly how the networking works between nodes, which I would ideally split out and consider as a separate point. The latter is related to the same goal of high scalability, but IMO it has a different design space and implications vs. the ipcache map management problems. These two could be combined for an opinionated architecture, but otherwise they do not seem to be fundamentally tied to one another.
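(Editor's note: a toy sketch of the "changed data model" idea above, not Cilium code. All addresses, CIDRs, and node names here are hypothetical. The point is that keying the cache by per-node pod CIDRs instead of per-pod /32 entries means one entry per node can answer lookups for every pod on that node.)

```python
import ipaddress

# Hypothetical per-prefix cache: one pod-CIDR entry per node,
# mapping to that node's (tunnel endpoint) IP. With a per-pod
# model, each of these /24s would instead be up to 256 /32 entries.
ipcache = {
    ipaddress.ip_network("10.0.1.0/24"): "192.168.0.1",  # node1
    ipaddress.ip_network("10.0.2.0/24"): "192.168.0.2",  # node2
}

def lookup(pod_ip):
    """Longest-prefix match of a destination pod IP against the cache."""
    addr = ipaddress.ip_address(pod_ip)
    best = None
    for net, node in ipcache.items():
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, node)
    return best[1] if best else None

print(lookup("10.0.2.37"))  # -> 192.168.0.2
```

The real BPF ipcache map does LPM lookup in the kernel; this linear scan only illustrates the entry-count reduction the comment describes.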

All that said, I am happy to merge this in the "Dormant" state as-is. If and when we pick this up to move it forward, consider revisiting the above feedback to scope some of the discussions.

@joestringer merged commit 162cd28 into main on Aug 9, 2024
@joestringer deleted the pr/pchaigno/high-scale-ipcache branch on August 9, 2024 17:36

6 participants