Gcs movearound #2
Conversation
* Remove Dockerfile, .dockerignore, LICENSE, README (will add a section about opengcs to the hcsshim README), CODEOWNERS, .travis.yml, go.mod, go.sum, and .gitignore.
* Add all of the sections that didn't exist in our .gitignore so we can properly ignore Makefile output and other things we didn't have to worry about in hcsshim.

Signed-off-by: Daniel Canter <[email protected]>
No need for it; all the deps will be going in the hcsshim vendor directory. Signed-off-by: Daniel Canter <[email protected]>
* Move hack/, init/, vsockexec/ and Makefile top level Signed-off-by: Daniel Canter <[email protected]>
These were just exact copies of the log and oc packages already in hcsshim. Signed-off-by: Daniel Canter <[email protected]>
Move all of opengcs/internal to internal Signed-off-by: Daniel Canter <[email protected]>
* Move service directory to internal and gcstools and gcs binaries (main.go) to cmd Signed-off-by: Daniel Canter <[email protected]>
Signed-off-by: Daniel Canter <[email protected]>
* Pull in the deps we use for opengcs Signed-off-by: Daniel Canter <[email protected]>
Signed-off-by: Daniel Canter <[email protected]>
@katiewasnothere @kevpar @ambarve @anmaxvl @jstarks. Here's what I've got currently for the rearrange. Mostly just move everything to internal, pull in the opengcs deps, touch up the Makefile, remove any unnecessary files and redundant packages that were identical across the two repos, and then build the binaries out of /cmd. This is pretty cool!
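For a rough picture of where things end up, here's an illustrative sketch of the layout described above (not the exact final tree):

```
cmd/
  gcs/        # gcs binary (main.go)
  gcstools/   # gcstools binary (main.go)
internal/     # former opengcs/internal packages, plus the moved service directory
hack/
init/
vsockexec/
Makefile
```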
Looks like we need to gofmt some stuff too to satisfy the almighty linter but just assume that's done already 😉
So, given this is a PR to your own repo, is this for early feedback, and then you will integrate it into the hcsshim PR?
@kevpar I remember the original plan was to just plop everything in first, validate the approach for moving everything around, and then merge the rearrange right after, but I don't see a need for that. We could just do this as the only PR, e.g. one PR to bring everything in AND rearrange things into the right spots. But yes, this was for feedback on the rearrange (realized I didn't even fully reply).
@@ -1,38 +0,0 @@
Are we putting this content in the main readme?
Yea, was curious how we wanna word things now, as the readme already seems pretty bare given how much hcsshim actually contains. Might be a good chance to revamp the README in general
A good start might be just putting the opengcs README in as a top-level section.
@@ -1,57 +0,0 @@
# platform=linux
Do we believe this is not needed anymore?
I remember hearing in passing that this is obsolete now, but we should probably confirm that first.
This is at least the kind of case where it's really good to call out in the commit description "Removing Dockerfile as we believe it is no longer needed". Currently if we come back and look at this commit in the future we have NO idea why it was actually removed.
Good call, the commit mentions deleting it but no explanation 😆
@@ -1,8 +0,0 @@
services:
Are we adding gcs building to the main CI?
Yea, that was the plan. Probably better to do that in here as well instead of a separate PR.
I think addressing this here would be best.
I don't see issues with putting this into the existing PR. In general it just wasn't clear to me what the plan was from reading this PR, so good to call that out.
Is it desirable for internal to have a flat structure? A lot of unrelated code is going to be at the same level.
@msscotb Might be useful to have the linux-specific packages (or maybe all of the opengcs internal packages) in /internal/opengcs/, or similar. It is a bit weird to end up in a package that's linux-specific in internal, I will admit.
@msscotb From our talk earlier, sounds like I'll play around with making an …
Heads up, nvidiaPrestartHook.go no longer exists in opengcs due to microsoft/opengcs@6c9e795
@katiewasnothere Yea, I saw. If any of the other PRs make it in before this, I'd just redo these steps based on the latest commit on master.
Signed-off-by: Daniel Canter <[email protected]>
This adds basic directory mount support for job containers. As any path on the host is already accessible from the container, the concept of volume mounts is a bit funny for job containers. However, it still makes sense to treat the volume mount point where the container image is mounted as where most things should be found regarding the container.

The manner in which this is done is by appending the container mount path for the volume to where the rootfs volume is mounted on the host and then symlinking it. So, with a container rootfs volume path of "C:\C\123456789abcdefgh\":

Example #1
{ "host_path": "C:\mydir", "container_path": "\dir\in\container" }
"C:\mydir" would be symlinked to "C:\C\123456789abcdefgh\dir\in\container"

Example #2 (drive letters will be stripped)
{ "host_path": "C:\mydir", "container_path": "C:\dir\in\container" }
"C:\mydir" would be symlinked to "C:\C\123456789abcdefgh\dir\in\container"

Signed-off-by: Daniel Canter <[email protected]>
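As a minimal sketch of the mapping described in that commit (assuming Windows path semantics; the helper name here is illustrative, not the actual hcsshim code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// containerPathToHostTarget appends the container_path to the rootfs volume
// path, stripping any drive letter first, so "C:\dir\in\container" and
// "\dir\in\container" both land in the same spot under the rootfs volume.
func containerPathToHostTarget(rootfsVolumePath, containerPath string) string {
	vol := filepath.VolumeName(containerPath) // "C:" if a drive letter is present, "" otherwise
	return filepath.Join(rootfsVolumePath, containerPath[len(vol):])
}

func main() {
	rootfs := `C:\C\123456789abcdefgh\`
	hostPath := `C:\mydir`

	target := containerPathToHostTarget(rootfs, `C:\dir\in\container`)
	fmt.Println(target) // C:\C\123456789abcdefgh\dir\in\container (on Windows)

	// The host directory is then symlinked into the container's view:
	// a link created at `target` that points back at `hostPath`.
	if err := os.Symlink(hostPath, target); err != nil {
		fmt.Println("symlink failed:", err)
	}
}
```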
Signed-off-by: Daniel Canter <[email protected]>
Signed-off-by: Daniel Canter <[email protected]>
Signed-off-by: Daniel Canter <[email protected]>
This change brings in the bindfltapi.dll bindings for setting up directory binds. This is intended to be used for job container volume mounts. This change also brings in a small constant that allows promoting a job object to a silo (not a server silo, aka Windows Server containers). This is because bindflt has the ability to create per-silo bindings, which is very useful. E.g.:

Map D:\my\dir to C:\test for silo #1
Map D:\my\other\dir to C:\test for silo #2

Processes in silo #1 and #2 will both see different directories, as well as processes on the host. If C:\test doesn't exist on the host, host processes simply won't see anything, and if it does exist they will continue to see whatever was always there.

Signed-off-by: Daniel Canter <[email protected]>
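For illustration only, here is roughly what setting up a per-silo mapping could look like; the parameter list and flag value below are assumptions made for this sketch, not taken from the actual bindings this change adds:

```go
package main

import (
	"golang.org/x/sys/windows"
)

// bfSetupFilter is a stand-in for the bindfltapi.dll BfSetupFilter binding;
// the real binding's signature may differ, this one is assumed for the sketch.
// In real code this would be a lazy syscall into bindfltapi.dll.
func bfSetupFilter(silo windows.Handle, flags uint32, virtPath, targetPath *uint16) error {
	return nil
}

// mapForSilo binds targetPath so that only processes in the given silo see it
// at virtPath; other silos and the host keep seeing whatever is (or isn't)
// already at virtPath.
func mapForSilo(silo windows.Handle, virtPath, targetPath string) error {
	// Assumed flag meaning "scope this mapping to the silo"; the value is illustrative.
	const bindfltFlagSiloMapping uint32 = 0x8

	virt, err := windows.UTF16PtrFromString(virtPath)
	if err != nil {
		return err
	}
	target, err := windows.UTF16PtrFromString(targetPath)
	if err != nil {
		return err
	}
	return bfSetupFilter(silo, bindfltFlagSiloMapping, virt, target)
}

func main() {
	var silo1 windows.Handle // job object handle promoted to a silo (placeholder)
	// e.g. map D:\my\dir to C:\test for silo #1:
	_ = mapForSilo(silo1, `C:\test`, `D:\my\dir`)
}
```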
Adjust comment to not mention CRI. Signed-off-by: Daniel Canter <[email protected]>
Refactor the wcow process dump logic to be easier to read. Signed-off-by: Daniel Canter <[email protected]>