Three common mistakes in security switch selection: have you made them?

Since security systems migrated from analog to IP, networking has played an ever larger and ever more complicated role in security projects. Having worked in security networking for many years, the author has seen technical staff in this industry take plenty of detours. Whether at security vendors, integrators, or end users, many people hold misconceptions about how switches should be selected and why video stutters.
Many of the so-called selection guides and documents circulating in the market are in fact full of pitfalls, such as the perennial question of "how many cameras can this switch actually carry?". So today I will briefly summarize these common errors.
Misunderstanding 1: Blindly calculating the number of cameras from the switching capacity
This method simply divides the switching capacity of the switch by the camera's bitrate and takes the result as the number of cameras the switch can carry.
By this logic, a 24-port full-Gigabit unmanaged switch has a port rate of 1000 Mbps per port, so as long as each port carries no more than 250 channels of 4 Mbps streams there should be no problem; could the whole switch then carry several thousand channels?
Even if we apply the so-called rule that actual performance is only 60~70% of the theoretical value, each port could still carry up to 150 channels without any problem, so could the whole unit carry more than 1000 channels?
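For concreteness, here is a minimal sketch of the naive arithmetic behind this misconception, using the figures quoted above (24 Gigabit ports, 4 Mbps streams). It is shown only to make the flawed reasoning explicit, not as a valid sizing method:

```python
# The naive "switching capacity / camera bitrate" arithmetic that
# Misunderstanding 1 relies on. Figures are the ones quoted in the text:
# 24 full-Gigabit ports, 4 Mbps per camera stream.

PORTS = 24
PORT_RATE_MBPS = 1000        # 1000 Mbps per port
CAMERA_STREAM_MBPS = 4       # assumed camera bitrate

cameras_per_port = PORT_RATE_MBPS // CAMERA_STREAM_MBPS                      # 250
cameras_per_port_derated = int(PORT_RATE_MBPS * 0.6) // CAMERA_STREAM_MBPS   # 150 with the "60%" myth

print(f"Naive per-port estimate: {cameras_per_port} cameras")
print(f"With the 60% 'derating' myth: {cameras_per_port_derated} cameras")
print(f"Naive whole-switch estimate: {cameras_per_port * PORTS} cameras")
```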
Can that really be the case?
By this logic there would be no difference between a dumb Gigabit switch and a managed one, and when we try to analyze the network causes of video stutter with this theory, we end up doubting everything we know.
In the end we find that the bandwidth design of every node in the network is perfectly fine, the traffic has no bottleneck at all, and the switch appears to be running normally, yet the video still stutters and breaks into mosaic. How do we explain that?
Misunderstanding 2: The actual performance of a switch is only 60~70% of its theoretical value?
Many people, including the pre-sales staff of switch manufacturers, will tell you when you design a security solution that the actual forwarding performance of a switch is only 60%~70% of the theoretical value, and that spare capacity should be budgeted accordingly.
I have worked in the data communication field for 17 years, both at equipment manufacturers and at a chip company, and in all that time I have never seen a switch chip whose actual performance (switching capacity) failed to reach its theoretical value.
For a 24-port full-Gigabit switch chip, the switching capacity must be at least 48 Gbps: 24 (ports) × 1 Gbps (1000 Mbps) × 2 (full duplex) = 48 Gbps; otherwise it cannot forward at wire speed. I do not believe any chip design company would make such a basic, common-sense error, and no reputable switch manufacturer would bring to market a switch that cannot forward at wire speed (blocking on the line cards of chassis switches is a different matter).
If you have ever encountered a switch whose switching capacity falls short of the theoretical value and delivers only 60~70% of it, then congratulations: you have managed to buy a defective product. Such a product is only possible if the R&D design or the production process is flawed and the unit was shipped without professional testing. The same reasoning applies to the packet forwarding rate.
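As a quick sanity check on those wire-speed figures, here is a small sketch that computes the required switching capacity and the wire-speed packet forwarding rate for a 24-port Gigabit switch; the 64-byte minimum frame plus 8-byte preamble and 12-byte inter-frame gap are standard Ethernet constants, not figures from this article:

```python
# Wire-speed figures for a 24-port full-Gigabit switch.

PORTS = 24
PORT_RATE_BPS = 1_000_000_000   # 1 Gbps per port

# Switching capacity: every port receiving and sending at line rate (full duplex).
switching_capacity_gbps = PORTS * (PORT_RATE_BPS / 1e9) * 2     # 48 Gbps

# Packet forwarding rate: smallest Ethernet frame (64 B) plus preamble (8 B)
# and inter-frame gap (12 B) = 84 B = 672 bits on the wire.
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8
pps_per_port = PORT_RATE_BPS / BITS_PER_MIN_FRAME               # ~1.488 Mpps
whole_switch_mpps = PORTS * pps_per_port / 1e6                  # ~35.7 Mpps

print(f"Required switching capacity: {switching_capacity_gbps:.0f} Gbps")
print(f"Wire-speed forwarding per port: {pps_per_port / 1e6:.3f} Mpps")
print(f"Whole switch: {whole_switch_mpps:.1f} Mpps")
```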
Misunderstanding 3: Selecting switches purely from past experience
At present, when network equipment manufacturers take part in security networking projects, besides selecting by port specification and by switching capacity, another common approach is to select models based on past project experience.
However, we often run into the following situation: the same switch is deployed in different projects whose network sizes are similar, whose camera counts and bitrates are similar, and whose networking schemes are identical.
Project A works fine and project B works fine, but project C stutters. Why? Contact the manufacturer and swap the unit; after the swap it seems fine, so it must have been bad luck. But after a while the stutter comes back. Why?
Endless equipment swaps, reboots, adjustments to the network structure, and so on. After repeated rounds of this, sometimes things improve and sometimes the problem comes and goes at random; everyone is exhausted, and in the end no definite cause can be found. Even first-tier network brands cannot give an exact reason.
So what actually causes video stutter?
First, let's take a brief look at the basics of video streaming:
A video stream is made up of I-frames and P-frames, and the I-frame is a very large frame. During network transmission, the loss of any packet belonging to an I-frame prevents that video frame from being decoded. At the same time, because video is real-time, UDP is generally used as the transport, meaning lost packets are not retransmitted. So, basically, as soon as the network drops packets, the video stutters.
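To see why a single lost packet matters so much, here is a rough sketch of how many UDP packets one I-frame spans; the 100 KB I-frame size and 1400-byte payload are illustrative assumptions, not figures from the text:

```python
import math

# Rough estimate: how many network packets does one I-frame occupy?
# Losing any one of them usually makes the whole I-frame, and the P-frames
# that reference it, undecodable until the next I-frame arrives.

I_FRAME_BYTES = 100 * 1024   # assumed I-frame size for a ~4 Mbps stream
PAYLOAD_BYTES = 1400         # assumed usable payload per UDP packet

packets_per_i_frame = math.ceil(I_FRAME_BYTES / PAYLOAD_BYTES)
print(f"One I-frame spans roughly {packets_per_i_frame} packets;")
print("with UDP there is no retransmission, so losing any one of them causes visible stutter or mosaic.")
```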
Second, let's briefly look at how a switch forwards traffic:
When a 100 Mbps port sends a 1 Mbit burst to another 100 Mbps port, it transmits at 100 Mbps for 1/100 of a second. If, within that same 1/100 of a second, another 100 Mbps port also sends a 1 Mbit burst to the same destination port, then even though the two bursts add up to only 2 Mbit, far below the 100 Mbps bandwidth, congestion still occurs.
Likewise, a 1000 Mbps port can accept traffic from only one other 1000 Mbps port at a time, but it can accept traffic from ten 100 Mbps ports simultaneously; more than ten and it too becomes congested.
Therefore, traffic (bandwidth) and rate are two concepts that must not be confused. However large the data burst, it is always transmitted at 100 Mbps or 1000 Mbps; only the time each burst takes differs. When the rates are equal, congestion occurs whenever two or more ports transmit to the same port at the same time. At that moment, if the buffer can hold the excess data, no packets are lost; if it cannot, packets are dropped.
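The role of the buffer in absorbing simultaneous bursts can be put into numbers. Below is a minimal sketch, assuming N ingress ports running at the same rate as the egress port each deliver a burst of equal size at the same instant:

```python
# Microburst model: n_ingress ports, each running at the same rate as the
# egress port, simultaneously send a burst of `burst_bits` toward that one
# egress port. Data arrives at n_ingress x rate but leaves at 1 x rate, so
# the excess must either sit in the buffer or be dropped.

def buffer_needed_bits(n_ingress: int, burst_bits: int) -> int:
    """Buffer required to absorb one simultaneous burst without packet loss."""
    # While the bursts arrive (burst_bits / rate seconds), the egress port
    # drains exactly one burst's worth of data; the rest must be buffered.
    return (n_ingress - 1) * burst_bits

# The example from the text: two 100 Mbps ports each send 1 Mbit to one 100 Mbps port.
print(buffer_needed_bits(2, 1_000_000) / 8 / 1024, "KB of buffer needed")        # ~122 KB

# Ten cameras bursting a 100 KB I-frame each toward one uplink at the same instant.
print(buffer_needed_bits(10, 100 * 1024 * 8) / 8 / 1024, "KB of buffer needed")  # ~900 KB
```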
From these two simple points we can see that the more video streams a switch carries, the greater the chance of instantaneous concurrency and the higher the probability of congestion. This is why the aggregation layer, and especially the core layer, is more prone to congestion: the number of video streams passing through the core switch is huge, with thousands of channels from the entire network flowing through it.
Let me emphasize again: in security networks, most stutter and packet loss is caused by this kind of congestion, not by insufficient forwarding performance. These are two completely different things.
Note: many customers confuse delay with stutter. Delay is the time difference between the moment an image is captured by the front-end network camera and the moment it is displayed on the user's monitoring device. The captured images go through compression encoding, network transmission, and decoding for display; although these steps are very short, we can still perceive that the displayed image lags behind reality, and this is image delay. However, as long as the delay stays under 1 s it is hard to notice, and in most scenarios it does not affect the application, except in specific industries that need millisecond-level, video-analysis-based processing, where latency is critical. Delay causes no image loss and involves no packet loss; stutter, by contrast, causes image loss and is caused by packet loss.
Besides congestion, another cause of packet loss is poor cabling workmanship, such as aged cables or oxidized and failing connectors. These problems produce packet loss that shows up as FCS error frames. Strictly speaking this has nothing to do with the switch, so I will not go into detail here.
1. Select the switch specifications and design the networking scheme according to the camera bitrate and quantity.
The author believes that as networking becomes more widespread in security, practitioners' technical skills will gradually strengthen, and network failures caused by wrong specification choices or networking schemes will become fewer and fewer; a bandwidth bottleneck caused this way is really a very basic error. For a network with a given number of cameras at a given bitrate, which switch (port count and port rate) to choose at the access layer, how many ports at the aggregation layer, and what to choose at the core layer is simple knowledge that I will not spend space on here; there is plenty of material online.
At the same time, to cope with traffic bursts, it is recommended that the planned bandwidth utilization of a switch port not exceed 70%, and ideally stay within 60%. Note: this is not because the actual performance is only 60~70% of the theoretical value, but to leave headroom for bursts. Forwarding performance is the first thing to guarantee; avoiding congestion comes next.
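As a concrete instance of this sizing rule, here is a small sketch; the 4 Mbps stream and the Gigabit uplink are assumed figures:

```python
# Access-layer sizing with burst headroom: how many cameras can share one uplink
# if planned utilization is kept at or below 60% of the port rate?

UPLINK_MBPS = 1000          # assumed Gigabit uplink
CAMERA_STREAM_MBPS = 4      # assumed camera bitrate
TARGET_UTILIZATION = 0.6    # design target from the text (ideally <=60%, never above 70%)

max_cameras = int(UPLINK_MBPS * TARGET_UTILIZATION // CAMERA_STREAM_MBPS)
print(f"At most {max_cameras} cameras of {CAMERA_STREAM_MBPS} Mbps per Gigabit uplink")  # 150
```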
2. Use a managed switch with buffering whenever possible.
Buffers reduce the packet loss caused by congestion. In theory, if the buffer were large enough, packet loss would be zero and video would never stutter for network reasons. A customer once asked the author how to calculate the buffer size a switch needs for so many channels at such-and-such a bitrate. In theory it can be calculated, but once you finish the calculation you will find that no switch on earth has a buffer large enough to satisfy it.
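A rough sketch of that "in theory" calculation shows why it breaks down in practice; all figures here are assumptions for illustration, reusing the microburst model sketched earlier:

```python
# Worst-case buffer estimate for a 24-port Gigabit access switch whose
# 23 camera-facing ports all deliver an I-frame burst toward the single
# uplink at exactly the same instant (same model as the microburst sketch).

N_CAMERA_PORTS = 23
I_FRAME_BYTES = 100 * 1024                                  # assumed I-frame size

single_burst_buffer = (N_CAMERA_PORTS - 1) * I_FRAME_BYTES  # bytes held while the uplink drains one burst
print(f"{single_burst_buffer / 1024 / 1024:.1f} MB needed for one simultaneous burst")

# If the overload lasts longer than a single burst (several cameras keep sending
# while the uplink is already saturated), the required buffer grows with the
# duration of the overload and has no upper bound, which is why "just calculate
# the buffer size" does not work in practice.
```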
Congestion is probabilistic; it is impossible for every port to be congested at the same moment, so chip vendors do not size buffers for that case, because buffer memory is expensive. In general, the more high-end the switch, the larger its buffer, which is why the probability of packet loss is lower when we choose a managed or Layer 3 switch. For the same 24-port Gigabit form factor, an unmanaged switch may have only a few hundred KB of buffer, while a Layer 3 switch may have tens of MB.
Therefore, when the budget allows and the cost is acceptable, choose a managed switch with a larger buffer, because this reflects how chip vendors design their chips. A bit of background: a 24-port Gigabit unmanaged chip and a 24-port Gigabit Layer 3 chip have the same switching capacity but differ in table capacity, buffer size, and feature set. A device manufacturer developing a switch can only pick a chip with as large a buffer as possible; it cannot change the buffer size, which is a hardware characteristic of the chip.
