gnrc, nimble/ble: notify network layer of BLE connection events #21608
benpicco merged 9 commits into RIOT-OS:master
Conversation
/**
 * @brief   Iterate over all parents in all DODAGs with @p addr as IPv6 address.
 *
 * @param[in]  addr    IPv6 address of the parent.
 * @param[out] parent  Pointer to the parent if one was found, NULL otherwise.
 * @param[in]  idx     Index to start searching from.
 *
 * @return  Index > 0 to continue the next search from, if a parent was found.
 * @return  -ENOENT, if no parent was found.
 */
int gnrc_rpl_parent_iter_by_addr(ipv6_addr_t *addr, gnrc_rpl_parent_t **parent, int idx);
Quoting from there for reviewer's convenience:
This function is a bit weird, but it is needed because, theoretically, if we have more than one RPL instance, the same node can be present as parent in multiple dodags, and thus appear multiple times in the gnrc_rpl_parents list.
We want to mark the parent as unreachable in all dodags.
I'm not knowledgeable enough with RPL to comment on this.
sys/include/net/gnrc/ipv6/nib/nc.h
/**
 * @brief   Adds an unmanaged neighbor entry to the NIB if the interface
 *          represents a 6LN node and the IPv6 address can be constructed
 *          from the L2 address @p l2addr.
 *
 * @param[in] iface       The interface to the neighbor.
 * @param[in] l2addr      The neighbor's layer-2 address.
 * @param[in] l2addr_len  Length of @p l2addr.
 *
 * @return  0 on success.
 * @return  -ENOTSUP, if the interface does not represent a 6LN or the
 *          gnrc_netif_t::device_type of @p iface does not support IID conversion.
 * @return  -EINVAL, if @p l2addr_len is invalid for the
 *          gnrc_netif_t::device_type of @p iface.
 * @return  -ENOMEM, if no space is left in the neighbor cache.
 */
int gnrc_ipv6_nib_nc_try_set_6ln(unsigned iface, const uint8_t *l2addr,
                                 size_t l2addr_len);
Only possible for 6LN nodes, because otherwise we can't statically build an IPv6 address from the link-layer address. The IPv6 address is built as the reverse of `_resolve_addr_from_ipv6`, which is used when resolving IPv6 -> L2 address, following the stateless address autoconfiguration section of the RFC.
mguetschow
left a comment
Thanks for your PR! We already discussed the design offline, so it looks fine by me. But maybe someone with more knowledge of GNRC and the NETAPI could also comment on the approach (@miri64 maybe?).
Rather high-level comments from my side as I'm not very familiar with GNRC and RPL.
Latest Murdock failure is unrelated I think? Edit: just saw #21621 (comment).
Force-pushed from 5423c6a to a7d5ac9
crasbe
left a comment
I'm unfamiliar with the network subsystem, so I only have some style-related review comments :D
benpicco
left a comment
This sounds pretty useful as a general mechanism - a first round of comments.
miri64
left a comment
Cursory review after scrolling through most of the code.
I ran spec07 (multi-hop, which includes unmodified RPL) of the release specifications to check if this breaks something. Sadly, it does. Interestingly, the non-RPL tasks are also failing, so I think something in IPv6 is broken. On master it ran fine (see also the latest run of the weekly release-tests, though that is already 4 days old). You can test yourself by cloning https://github.com/RIOT-OS/Release-Specs and running RIOTBASE="<path to RIOT repo>" tox -e test -- -k "spec07" --non-RC in the root directory of that repo. You need an IoT-LAB account for that and need to be authenticated with it.
crasbe
left a comment
Just some minor style things.
benpicco
left a comment
Looks good, please squash!
Uh, looks like CI is not happy yet
Sorry for the many pushes. Took me a few attempts to get the conditional-compilation gates right.
Please squash! |
Force-pushed from 314ab83 to d19e835
The netapi notify API enables protocol-independent, cross-layer notification events. Lower layers in the network stack can use this to inform upper layers of network events.
If a new L2 connection was established, the node should be inserted into the neighbor cache. For 6LN nodes, the NIB will never start the usual neighbor discovery process because it can directly resolve the link-layer address from the IPv6 address. Thus we have to manually add such nodes to the NC by building the IPv6 address from the L2 address. If an L2 connection is closed, the node is removed from the NC again.
Iterate through all parents whose address matches `addr`. In most cases there will be only a single parent, but if `GNRC_RPL_INSTANCES_NUMOF > 1`, one node can be a parent in multiple DODAGs.
Handle netapi notifications for discovered and unreachable neighbors. For newly discovered neighbors, send a DIS message to the node. For unreachable nodes, trigger a timeout for RPL parents that match the address.
`gnrc_rpl` now receives a notification when a new node is reachable (i.e., connected) and sends a DIS, so there is no need to also do this in rpble anymore.
Force-pushed from d19e835 to 4a326d6
Squash done.
Murdock failure likely unrelated as well, see #21608 (comment).
The problem is that […] A (slightly cheeky) quick fix would be:
--- i/sys/include/atomic_utils.h
+++ w/sys/include/atomic_utils.h
@@ -144,9 +144,7 @@
#include "atomic_utils_arch.h" /* IWYU pragma: export */
-#ifdef __cplusplus
-extern "C" {
-#endif
+#ifndef __cplusplus
/* NOLINTBEGIN(bugprone-macro-parentheses, readability-inconsistent-declaration-parameter-name)
*
@@ -1748,9 +1746,7 @@ static inline uint64_t semi_atomic_fetch_and_u64(volatile uint64_t *dest,
}
#endif /* HAS_ATOMIC_FETCH_AND_U64 || !HAS_ATOMIC_STORE_U64 */
-#ifdef __cplusplus
-}
-#endif
+#endif /* !__cplusplus */
/* NOLINTEND(bugprone-macro-parentheses, readability-inconsistent-declaration-parameter-name) */
/** @} */

Maybe @maribu has a better idea?
Ah, or we just add:
--- i/pkg/opendsme/Makefile.dep
+++ w/pkg/opendsme/Makefile.dep
@@ -29,7 +29,7 @@ USEMODULE += ieee802154
USEMODULE += ztimer_usec
USEMODULE += cpp11-compat
-CXXEXFLAGS += -Wno-unused-parameter -Wno-pedantic -Wno-missing-field-initializers -Wno-unused-but-set-variable -Wno-maybe-uninitialized -Wno-unused-variable -Wno-reorder -Wno-address -Wno-sign-compare -Wno-unused-function
+CXXEXFLAGS += -Wno-unused-parameter -Wno-pedantic -Wno-missing-field-initializers -Wno-unused-but-set-variable -Wno-maybe-uninitialized -Wno-unused-variable -Wno-reorder -Wno-address -Wno-sign-compare -Wno-unused-function -fpermissive
FEATURES_REQUIRED += cpp # basic C++ support
FEATURES_REQUIRED += libstdcpp # libstdc++ support (for #include <cstdio>)
It would actually be nice to be able to use […]. The following patch should fix compilation on C++:
diff --git a/sys/include/atomic_utils.h b/sys/include/atomic_utils.h
index 97b3aaaaa9..2c685f4066 100644
--- a/sys/include/atomic_utils.h
+++ b/sys/include/atomic_utils.h
@@ -274,14 +274,14 @@ static inline uint64_t atomic_load_u64(const volatile uint64_t *var);
static inline unsigned atomic_load_unsigned(const volatile unsigned *var)
{
if (sizeof(uint64_t) == sizeof(unsigned)) {
- return atomic_load_u64((volatile void *)var);
+ return atomic_load_u64((const volatile uint64_t *)(uintptr_t)var);
}
if (sizeof(uint32_t) == sizeof(unsigned)) {
- return atomic_load_u32((volatile void *)var);
+ return atomic_load_u32((const volatile uint32_t *)(uintptr_t)var);
}
- return atomic_load_u16((volatile void *)var);
+ return atomic_load_u16((const volatile uint16_t *)(uintptr_t)var);
}
/**
@@ -362,13 +362,13 @@ static inline void atomic_store_u64(volatile uint64_t *dest, uint64_t val);
static inline void atomic_store_unsigned(volatile unsigned *dest, unsigned val)
{
if (sizeof(uint64_t) == sizeof(unsigned)) {
- atomic_store_u64((volatile void *)dest, val);
+ atomic_store_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
else if (sizeof(uint32_t) == sizeof(unsigned)) {
- atomic_store_u32((volatile void *)dest, val);
+ atomic_store_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
else {
- atomic_store_u16((volatile void *)dest, val);
+ atomic_store_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
}
@@ -462,14 +462,14 @@ static inline unsigned atomic_fetch_add_unsigned(volatile unsigned *dest,
unsigned summand)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return atomic_fetch_add_u64((volatile void *)dest, summand);
+ return atomic_fetch_add_u64((volatile uint64_t *)(uintptr_t)dest, summand);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return atomic_fetch_add_u32((volatile void *)dest, summand);
+ return atomic_fetch_add_u32((volatile uint32_t *)(uintptr_t)dest, summand);
}
- return atomic_fetch_add_u16((volatile void *)dest, summand);
+ return atomic_fetch_add_u16((volatile uint16_t *)(uintptr_t)dest, summand);
}
/** @} */
@@ -528,14 +528,14 @@ static inline unsigned atomic_fetch_sub_unsigned(volatile unsigned *dest,
unsigned subtrahend)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return atomic_fetch_sub_u64((volatile void *)dest, subtrahend);
+ return atomic_fetch_sub_u64((volatile uint64_t *)(uintptr_t)dest, subtrahend);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return atomic_fetch_sub_u32((volatile void *)dest, subtrahend);
+ return atomic_fetch_sub_u32((volatile uint32_t *)(uintptr_t)dest, subtrahend);
}
- return atomic_fetch_sub_u16((volatile void *)dest, subtrahend);
+ return atomic_fetch_sub_u16((volatile uint16_t *)(uintptr_t)dest, subtrahend);
}
/** @} */
@@ -593,14 +593,14 @@ static inline unsigned atomic_fetch_or_unsigned(volatile unsigned *dest,
unsigned val)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return atomic_fetch_or_u64((volatile void *)dest, val);
+ return atomic_fetch_or_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return atomic_fetch_or_u32((volatile void *)dest, val);
+ return atomic_fetch_or_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
- return atomic_fetch_or_u16((volatile void *)dest, val);
+ return atomic_fetch_or_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
/** @} */
@@ -658,14 +658,14 @@ static inline unsigned atomic_fetch_xor_unsigned(volatile unsigned *dest,
unsigned val)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return atomic_fetch_xor_u64((volatile void *)dest, val);
+ return atomic_fetch_xor_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return atomic_fetch_xor_u32((volatile void *)dest, val);
+ return atomic_fetch_xor_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
- return atomic_fetch_xor_u16((volatile void *)dest, val);
+ return atomic_fetch_xor_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
/** @} */
@@ -723,14 +723,14 @@ static inline unsigned atomic_fetch_and_unsigned(volatile unsigned *dest,
unsigned val)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return atomic_fetch_and_u64((volatile void *)dest, val);
+ return atomic_fetch_and_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return atomic_fetch_and_u32((volatile void *)dest, val);
+ return atomic_fetch_and_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
- return atomic_fetch_and_u16((volatile void *)dest, val);
+ return atomic_fetch_and_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
/** @} */
@@ -935,14 +935,14 @@ static inline unsigned semi_atomic_fetch_add_unsigned(volatile unsigned *dest,
unsigned summand)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return semi_atomic_fetch_add_u64((volatile void *)dest, summand);
+ return semi_atomic_fetch_add_u64((volatile uint64_t *)(uintptr_t)dest, summand);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return semi_atomic_fetch_add_u32((volatile void *)dest, summand);
+ return semi_atomic_fetch_add_u32((volatile uint32_t *)(uintptr_t)dest, summand);
}
- return semi_atomic_fetch_add_u16((volatile void *)dest, summand);
+ return semi_atomic_fetch_add_u16((volatile uint16_t *)(uintptr_t)dest, summand);
}
/** @} */
@@ -1001,14 +1001,14 @@ static inline unsigned semi_atomic_fetch_sub_unsigned(volatile unsigned *dest,
unsigned subtrahend)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return semi_atomic_fetch_sub_u64((volatile void *)dest, subtrahend);
+ return semi_atomic_fetch_sub_u64((volatile uint64_t *)(uintptr_t)dest, subtrahend);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return semi_atomic_fetch_sub_u32((volatile void *)dest, subtrahend);
+ return semi_atomic_fetch_sub_u32((volatile uint32_t *)(uintptr_t)dest, subtrahend);
}
- return semi_atomic_fetch_sub_u16((volatile void *)dest, subtrahend);
+ return semi_atomic_fetch_sub_u16((volatile uint16_t *)(uintptr_t)dest, subtrahend);
}
/** @} */
@@ -1066,14 +1066,14 @@ static inline unsigned semi_atomic_fetch_or_unsigned(volatile unsigned *dest,
unsigned val)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return semi_atomic_fetch_or_u64((volatile void *)dest, val);
+ return semi_atomic_fetch_or_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return semi_atomic_fetch_or_u32((volatile void *)dest, val);
+ return semi_atomic_fetch_or_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
- return semi_atomic_fetch_or_u16((volatile void *)dest, val);
+ return semi_atomic_fetch_or_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
/** @} */
@@ -1132,14 +1132,14 @@ static inline unsigned semi_atomic_fetch_xor_unsigned(volatile unsigned *dest,
unsigned val)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return semi_atomic_fetch_xor_u64((volatile void *)dest, val);
+ return semi_atomic_fetch_xor_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return semi_atomic_fetch_xor_u32((volatile void *)dest, val);
+ return semi_atomic_fetch_xor_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
- return semi_atomic_fetch_xor_u16((volatile void *)dest, val);
+ return semi_atomic_fetch_xor_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
/** @} */
@@ -1198,14 +1198,14 @@ static inline unsigned semi_atomic_fetch_and_unsigned(volatile unsigned *dest,
unsigned val)
{
if (sizeof(unsigned) == sizeof(uint64_t)) {
- return semi_atomic_fetch_and_u64((volatile void *)dest, val);
+ return semi_atomic_fetch_and_u64((volatile uint64_t *)(uintptr_t)dest, val);
}
if (sizeof(unsigned) == sizeof(uint32_t)) {
- return semi_atomic_fetch_and_u32((volatile void *)dest, val);
+ return semi_atomic_fetch_and_u32((volatile uint32_t *)(uintptr_t)dest, val);
}
- return semi_atomic_fetch_and_u16((volatile void *)dest, val);
+ return semi_atomic_fetch_and_u16((volatile uint16_t *)(uintptr_t)dest, val);
}
/** @} */
Force-pushed from 068f4f2 to 1f3daf3
Contribution description
This PR solves two issues that have been previously discussed in #21580 and #21586:
Both issues require communication between different netapi layers and are related to each other, so I bundled them in one PR.
Cross-layer Communication with netapi
With the current netapi it is not possible to send arbitrary data between networking threads. The API only allows to either dispatch packets to multiple threads that are registered in the netreg (`gnrc_netapi_dispatch`), or to set/get options to/from a single thread whose PID is known (`gnrc_netapi_{get,set}`).

In my use case, however, a lower layer (BLE) wants to notify other layers higher up the stack (IPv6, RPL) of an event (e.g., connection established). Ideally, the lower layer shouldn't need to know if/what higher layers are present, similarly to the case when packets are passed up/down the stack.
netapi notify
To solve the above, I added a new netapi function `gnrc_netapi_notify` that dispatches a "notification" event to all subscribed threads. Like in `gnrc_netapi_dispatch`, the target threads are queried from netreg, but with `notify` the sending is done synchronously by blocking for an ACK. The latter is needed because, contrary to packets that get allocated in the `pktbuf`, the notify event data is stack-allocated, so blocking is needed to ensure the data was read before the function returns. This is not ideal, but I wasn't able to come up with a better solution.

Handling BLE connection events
The `notify` event API is used: […] In 2., the notification is sent from the IPv6 thread rather than from BLE, because the address translation from L2 address to IPv6 address is solved on that layer.
Testing procedure
Tested on the FIT IoT-LAB testbed with 9 nrf52dk nodes by using the `gnrc_networking` example + the `nimble_rpble` module. The entries are added/removed from the NIB neighbor cache as expected, and the RPL tree updates when connections between parents and children close.
Edit:
Sorry, misunderstood the section as "how did I test it". Actual test instructions:
1. Add `USEMODULE += nimble_rpble` to the `gnrc_networking` example and run it on at least 2 nodes (I tested with nrf52dk).
2. On one node, run `ifconfig 8 add 2001:db8:1::1/64` && `rpl root 1 2001:db8:1::1`.
3. Check the connection and RPL state (`ble info` and `rpl` commands).
4. Check the neighbor cache (`nib neigh`).
Issues/PRs references
Replaces #21586.
Part of #21574.
Related to (and might solve) #21580.
Additional Notes
Happy to split up the PR into separate sub-PRs if that's easier, but as already mentioned, it all relates to each other.
See self-review for discussion points.