Smart Card System - A Level Computing - Marked by (2023)


Smart Computer Lab Access Control Implementation

Using smart cards with secure and reliable cryptography

Chapter 1: Client-Server Technology

1.1 Concept and Client-Server Architecture

The term "client/server" implies that clients and servers are separate logical entities that work together, usually over a network, to accomplish a task. Client/server is more than a client and a server communicating over a network. Client/server uses asynchronous and synchronous messaging techniques with the help of middleware to communicate over a network.

Client/Server uses this approach of a client (UI) and server (database I/O) to provide its robust distributed capabilities. The company, Sigma, has used this technique for more than 15 years to enable its products to be ported to multiple platforms, databases, and transaction processors, while maintaining product marketability and improved functionality from decade to decade.

Sigma's client/server product uses an asynchronous approach of sending a message to request an action and receiving a message containing the requested information. This approach allows the product to send CPU-intensive requests to the server to perform and return the results to the client when finished.

Sigma's architecture is based on reusability and portability. Sigma currently uses a standard I/O routine that is kept separate from the user interface. Sigma's current architecture supports character-based displays and a variety of databases, with the user interface independent of database access. This architecture corresponds directly to the architecture used in a GUI client/server environment.

A traditional client/server application is a file server, where clients request files from the file server. This results in the entire file being sent to the client, but requires many message exchanges over the network. Another traditional client/server application is the database server where clients pass SQL requests to the server. The database server executes each SQL statement and passes the results to the client.

A client typically uses Open Database Connectivity (ODBC) to send SQL requests to the server for processing. ODBC provides a standard SQL interface for sending requests to the server. Remote Procedure Call (RPC) is an extension of the traditional client/server model suitable for transaction processing environments. It allows the creation of a transaction server. Clients call a remote procedure and pass parameters to it. A single message allows the transaction server to execute stored (compiled) database statements and return the results to the client. This distribution of processing reduces network traffic and improves performance. Site autonomy can also be increased by limiting database modifications to applications running locally.
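
As a sketch of the request/response messaging described above, the toy client/server below exchanges one message each way over a TCP socket. The "processing" (upper-casing the request line) and all names are illustrative, not part of any product mentioned in the text.

```java
import java.io.*;
import java.net.*;

// Minimal client/server request-response sketch: the client sends a one-line
// request, the server performs some work and returns a one-line result.
public class RequestResponseDemo {

    // Start a server that answers each request line with an upper-cased echo,
    // standing in for "execute the request and return the results".
    static ServerSocket startServer() throws IOException {
        ServerSocket server = new ServerSocket(0); // bind to any free port
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println(in.readLine().toUpperCase()); // "process" the request
            } catch (IOException ignored) { }
        });
        t.setDaemon(true);
        t.start();
        return server;
    }

    // The client side: one request message out, one reply message back.
    static String request(int port, String msg) throws IOException {
        try (Socket s = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = startServer();
        System.out.println(request(server.getLocalPort(), "select name from users"));
    }
}
```

A real database server would parse the SQL and return a result set, but the message flow (request out, reply back over one connection) is the same.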

In short, Remote Procedure Call (RPC) is a mechanism that allows programs to communicate with each other.

1.2 IP Protocol

TCP sends each datagram to IP. Of course, it has to tell IP the Internet address of the computer at the other end. This is all that concerns IP. It doesn't care what is in the datagram, or even in the TCP header. IP's job is simply to find a route for the datagram and get it to the other end. To allow gateways or other intermediate systems to forward the datagram, IP adds its own header. The main elements of this header are the source and destination Internet addresses (32-bit addresses), the protocol number, and another checksum. The source Internet address is simply the address of the source machine. The destination Internet address is the address of the other machine.

The protocol number tells IP on the other end to send the datagram to TCP. Although most IP traffic uses TCP, there are other protocols that can use IP, so IP must be told which protocol to send the datagram to. Finally, the checksum allows the IP on the other end to verify that the header has not been corrupted during transmission. TCP and IP have separate checksums. IP must be able to verify that the header was not corrupted during transmission, or it could send a message to the wrong place. It is more efficient and secure for TCP to compute a separate checksum for the TCP header and data.
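
The IP header checksum works as follows: the header is summed as 16-bit words in ones'-complement arithmetic, any carries are folded back into the low 16 bits, and the result is complemented. A minimal sketch (the example words are the ones used in RFC 1071's worked example, not a real header):

```java
// Internet checksum sketch: ones'-complement sum of 16-bit words, carries
// folded back in, result complemented. A real implementation checksums the
// actual header bytes; the word values used here are illustrative.
public class InternetChecksum {

    static int checksum(int[] words) {
        long sum = 0;
        for (int w : words) sum += (w & 0xFFFF); // accumulate 16-bit words
        while ((sum >> 16) != 0)                 // fold carries into the low 16 bits
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (int) (~sum & 0xFFFF);            // ones' complement of the sum
    }

    public static void main(String[] args) {
        // Word sequence from RFC 1071's example; its checksum is 0x220D.
        System.out.printf("0x%04X%n", checksum(new int[]{0x0001, 0xF203, 0xF4F5, 0xF6F7}));
    }
}
```

The receiver runs the same sum over the header including the checksum field; a result of all ones indicates an undamaged header.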

IP addresses are used to deliver data packets over a network and have what is called end-to-end meaning. This means that the source and destination IP addresses remain constant as the packet traverses a network. Every time a packet travels through a router, the router will refer to its routing table to see if it can match the network number of the destination IP address with an entry in its routing table. If a match is found, the packet is forwarded to the next hop router for the destination network in question. If no match is found, then the packet can be forwarded to the router defined as the default gateway, or the router can drop the packet.

Packets are forwarded to a default router in the belief that the default router has more network information in its routing table and will therefore be able to route the packet correctly to its final destination. This is typically used when connecting a LAN of PCs to the Internet: each PC will have the router that connects the LAN to the Internet set as its default gateway. A default gateway looks like this in a host's routing table: the default route appears as the destination network, and the IP address of the default gateway appears as the next-hop router.

If the source and destination IP addresses remain constant as the packet progresses through the network, how is the next-hop router addressed? In a LAN environment, this is handled by the MAC (Media Access Control) address. The key point is that MAC addresses change each time a packet travels through a router, while IP addresses remain constant. Subnet masks are essential tools in network design, but they can make things more difficult to understand. Subnet masks are used to divide a network into a collection of smaller subnets. This can be done to reduce network traffic on each subnet or to make the internetwork more manageable as a whole.
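
The routing-table lookup described above comes down to one bitwise test: a destination address matches an entry when (destination AND mask) equals the entry's network number. A small sketch with illustrative addresses:

```java
// Routing-table match sketch: addresses are handled as 32-bit integers and a
// destination matches an entry when (dest & mask) == network number.
public class RouteMatch {

    // Convert a dotted-quad address string to a 32-bit integer.
    static int toInt(String dotted) {
        String[] p = dotted.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8)  |  Integer.parseInt(p[3]);
    }

    static boolean matches(String dest, String network, String mask) {
        return (toInt(dest) & toInt(mask)) == toInt(network);
    }

    public static void main(String[] args) {
        System.out.println(matches("192.168.1.57", "192.168.1.0", "255.255.255.0")); // true
        System.out.println(matches("192.168.2.5",  "192.168.1.0", "255.255.255.0")); // false
    }
}
```

Subnetting simply uses a longer mask than the network's natural class would imply, so more bits of the destination must match before a route entry applies.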

1.3 Network Protocols in a LAN

Most LANs connect workstations and personal computers. Each node (individual computer) on a LAN has its own CPU with which it runs programs, but can also access data and devices anywhere on the LAN. This means that many users can share expensive devices, such as laser printers, as well as data. Users can also use the LAN to communicate with each other, sending emails or participating in chat sessions.

The following characteristics differentiate one LAN from another:

  • Topology: The geometric arrangement of devices on the network. For example, the devices can be arranged in a ring or in a straight line.
  • Protocols: The coding rules and specifications for sending data. The protocols also determine whether the network uses peer-to-peer or client/server architecture.
  • Media: Devices can be connected using twisted pair cable, coaxial cables, or fiber optic cables. Some networks dispense with the media connection entirely and instead communicate via radio waves.

1.3.1. File Transfer Protocol

File Transfer Protocol (FTP) provides the basics for sharing files between hosts. FTP uses TCP to create a virtual connection for control information and then creates a separate TCP connection for data transfers. The control connection uses an image of the TELNET protocol to exchange commands and messages between hosts.

1.3.2. User datagram protocol

The User Datagram Protocol (UDP) provides a simple but unreliable message service for transaction-oriented services. Each UDP header carries a source port identifier and a destination port identifier, allowing higher-level protocols to target specific applications and services between hosts.
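
UDP's service model can be seen in a few lines: no connection is set up and delivery is not guaranteed, which is why the receive below uses a timeout. In this sketch a socket simply sends a datagram to its own port; the message text is illustrative.

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

// UDP sketch: a datagram carries only source port, destination port, and
// payload. Here a socket sends a datagram to its own port and reads it back.
public class UdpDemo {

    static String loopback(String msg) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] data = msg.getBytes(StandardCharsets.UTF_8);
            sock.send(new DatagramPacket(data, data.length,
                    InetAddress.getLoopbackAddress(), sock.getLocalPort()));
            byte[] buf = new byte[512];
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            sock.setSoTimeout(2000); // UDP is unreliable: don't wait forever
            sock.receive(p);
            return new String(p.getData(), 0, p.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loopback("hello"));
    }
}
```

Note that there is no handshake, acknowledgment, or retransmission here; any of that must be built by the application on top of UDP.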

1.3.3. Transmission Control Protocol

TCP provides a reliable virtual connection and stream delivery service to applications by using sequenced acknowledgment with packet retransmission when necessary. TCP uses 32-bit sequence numbers that count bytes in the data stream. Each TCP packet contains the initial sequence number of the data in that packet and the sequence number (called the acknowledgment number) of the last byte received from the remote peer. With this information, a sliding-window protocol is implemented. Forward and reverse sequence numbers are completely independent, and each TCP peer must keep track of both its own sequence numbering and the numbering used by the remote peer.
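
Because sequence numbers are 32-bit values that wrap around at 2^32, "a precedes b" must be decided with modular arithmetic rather than ordinary comparison. A sketch:

```java
// 32-bit sequence-number arithmetic sketch: "a comes before b" is decided by
// the sign of the truncated 32-bit difference, so comparisons stay correct
// even when the counter wraps past 2^32.
public class SeqCompare {

    // True if sequence number a precedes b in modular (wraparound) order.
    static boolean before(long a, long b) {
        return (int) (a - b) < 0; // cast truncates to a signed 32-bit difference
    }

    public static void main(String[] args) {
        System.out.println(before(100, 200));       // true: plain ordering
        System.out.println(before(0xFFFFFFF0L, 5)); // true: 5 is "after" the wrap
    }
}
```

This is the same trick an implementation uses to decide whether an arriving acknowledgment number falls inside the current send window.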

1.4 WinSock 2.0 Architecture

Windows Sockets version 2.0 (WinSock 2) formalizes the API for other protocol suites (ATM, IPX/SPX, and DECnet) and allows them to coexist simultaneously. It retains full backwards compatibility with the existing 1.1 API, parts of which are further clarified, so all existing WinSock applications can continue to run without modification (the only exception being WinSock 1.1 applications that use blocking hooks, which must be rewritten to work without them).

WinSock 2 goes beyond simply allowing multiple protocol stacks to coexist, in theory even allowing the creation of applications that are independent of the network protocol. A WinSock 2 application can transparently select a protocol based on its service needs. The application can adapt to differences in network names and addresses using the mechanisms provided by WinSock 2.

1.4.1. WinSock 2 Architecture

WinSock 2 has a completely new architecture that provides much more flexibility. The new WinSock 2 architecture enables simultaneous support of multiple protocol stacks, interfaces, and service providers. There's still a DLL on top, but there's another layer below it and a standard service provider interface, which add flexibility.

WinSock 2 adopts the Windows Open Systems Architecture (WOSA) model, which separates the API from the protocol service provider. In this model, the WinSock DLL provides the standard API, and each provider installs its own service provider layer underneath. The API layer "talks" to a service provider through a standardized Service Provider Interface (SPI) and is capable of multiplexing between multiple service providers simultaneously.

1.5 Transmission Control Protocol (TCP)

Initially, TCP was designed to recover from node or line failures, with the network propagating routing table changes to all router nodes. Since this update takes some time, TCP is slow to start its recovery. TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of lines and equipment in anticipation of growth in demand.

TCP treats data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet begins with byte 379642 and contains 200 bytes of data." The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges the data that has been received and retransmits the data that has been lost. The design of TCP means that error recovery is done end-to-end between the client and server machines. There is no formal standard for tracking problems in the middle of the network, although each network has adopted some ad hoc tools.

To ensure that all types of systems from all vendors can communicate, TCP/IP is fully standardized on the LAN. However, larger networks based on long distances and telephone lines are more volatile. New technologies emerge and become obsolete within a few years. With phone and cable companies competing to build the National Information Superhighway, no single standard can govern communications across the entire city, across the country, or around the globe.

The original design of TCP/IP as a Network of Networks fits very well within today's technological uncertainty. TCP/IP data can be sent over a LAN, or it can be carried within an internal corporate SNA network, or it can be leveraged on cable television service. Also, machines connected to any of these networks can communicate with any other network through gateways provided by the network provider.

1.6. Packet Data Transmission

The transmission of a data packet consists of a series of handshake sequences on the point-to-point connection between the local port of an end node and its repeater: one side makes a request, and the other side acknowledges the request. The sending sequence of a data packet transmission is requested by an end node and controlled by the repeater. When the transmission of a data packet is about to occur:

If an end node has a data packet ready to send, it transmits a Request_Normal or Request_high control signal. Otherwise, the end node transmits the Idle_Up control signal.

  1. The repeater polls all local ports to determine which end nodes are requesting to send a data packet and at what priority level that request is (normal or high).
  2. The repeater selects the next end node with an outstanding high priority request. Ports are selected in port order. If there are no outstanding high priority requests, the next normal priority port (in port order) is selected. This selection causes the selected port to receive the grant signal. Packet transmission begins when the end node detects the grant signal.
  3. The repeater then sends the incoming signal to all other end nodes, alerting them to the possibility of an incoming packet. The repeater decodes the destination address of the frame being transmitted as it is received.
  4. When an end node receives the incoming control signal, it prepares to receive a packet by stopping the transmission of requests and listening on the media for the data packet.
  5. Once the repeater has decoded the destination address, the packet is delivered to the addressed end node(s) and any promiscuous nodes. Those nodes that do not receive the data packet receive the Idle_Down signal from the repeater.
  6. When the end nodes receive the data packet, they return to their state prior to receiving the data packet, either by sending an Idle_Up signal or by requesting a data packet to be sent.
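
The repeater's selection rule in step 2 (outstanding high-priority requests first, in port order, then normal-priority ones) can be sketched as follows; the request encoding is illustrative:

```java
// Sketch of the repeater's port-selection step: scan ports in order for an
// outstanding high-priority request; only if none exists, take the next
// normal-priority request. Request levels here are an illustrative encoding.
public class RepeaterSelect {
    static final int IDLE = 0, NORMAL = 1, HIGH = 2;

    // Returns the index of the port granted permission to send, or -1 if idle.
    static int select(int[] requests) {
        for (int p = 0; p < requests.length; p++)
            if (requests[p] == HIGH) return p;   // high priority wins, in port order
        for (int p = 0; p < requests.length; p++)
            if (requests[p] == NORMAL) return p; // else the next normal request
        return -1;                               // no outstanding requests
    }

    public static void main(String[] args) {
        System.out.println(select(new int[]{NORMAL, IDLE, HIGH, NORMAL})); // port 2
        System.out.println(select(new int[]{IDLE, NORMAL, IDLE}));         // port 1
    }
}
```

The selected port then receives the grant signal, and the handshake continues with steps 3 to 6 above.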

1.7. Conclusion

WinSock 2 has a completely new architecture that provides much more flexibility. The new WinSock 2 architecture enables simultaneous support of multiple protocol stacks, interfaces, and service providers. It targets the Win32 platform, but it is designed to be backwards compatible, so even Windows 95 can use it without conflict. The 32-bit wsock32.dll ships with Windows NT and Windows 95 and runs on top of Microsoft's TCP/IP stack. These 32-bit environments also have a winsock.dll file that acts as a "thunk layer" to allow 16-bit WinSock applications to run on top of the 32-bit wsock32.dll file. In contrast, Microsoft's Win32s installs a 32-bit wsock32.dll thunk layer in 16-bit Windows environments (Windows version 3.1 and Windows for Workgroups 3.11) over any vendor's WinSock DLL currently in use.

LANs are capable of transmitting data at very fast speeds, much faster than data can be transmitted over a telephone line; but distances are limited, and there is also a limit to the number of computers that can be connected to a single LAN. File Transfer Protocol (FTP), User Datagram Protocol (UDP), and Transmission Control Protocol (TCP) are commonly applied in LAN transactions. Data transmission using UDP is faster than TCP, but TCP offers better reliability and data integrity than UDP.

The client/server architecture basically consists of client machines and server machines. There are two traditional forms of client/server architecture. The first is the file server, where clients request files from the file server. This results in the entire file being sent to the client, but requires many message exchanges over the network. The other is the database server, where clients pass SQL requests to the server. The database server executes each SQL statement and passes the results to the client.

The source and destination IP addresses remain constant as the packet traverses a network. The packet can be forwarded to the router defined as the default gateway, or the router can drop the packet. Packets are forwarded to a default router in the belief that the default router has more network information in its routing table and will therefore be able to route the packet correctly to its final destination.

Chapter 2: Cryptography Technology and Data Encryption

2.1 Introduction to encryption and cryptography technology

Encryption is the conversion of a piece of data or plain text into a form, called ciphertext, that cannot be easily understood by unauthorized persons. Meanwhile, decryption is the process of converting encrypted data or ciphertext back to its original form so that it can be understood. The conversion of plain text into encrypted text, or vice versa, is performed by a cryptographic algorithm. Most encryption algorithms are based on complex mathematics that works in only one direction, and generally rely on the difficulty of factoring the very large numbers (keys) used for encryption. These large numbers are the products of large prime numbers. Many encryption programs use the same key to encrypt and decrypt messages, a practice known as symmetric cryptography. This is a quick and simple method to encrypt messages and folders and is best for protecting local messages, files, and folders.

Cryptography is the science of information security. Examples of cryptography techniques include microdots, merging words with images, etc. However, cryptography is most often associated with encoding plaintext into ciphertext and then back again. People who practice this field are known as cryptographers. Cryptography mainly consists of four objectives:

  • Confidentiality: the information cannot be understood by anyone for whom it was not intended.
  • Integrity: the information cannot be altered undetected during storage or transit between sender and receiver.
  • Non-repudiation: the creator or sender of the information cannot later deny their intention in creating or transmitting it.
  • Authentication: the sender and receiver can confirm each other's identity and the source or destination of the information.

2.2 DES (Data Encryption Standard) and Implementation

DES is the US government's data encryption standard, a product cipher that operates on 64-bit blocks of data using a 56-bit key. Triple DES is a product cipher that, like DES, operates on 64-bit blocks of data. There are several forms, each of which uses the DES cipher three times. Some forms use two 56-bit keys and others use three. The DES "modes of operation" can also be used with triple-DES.
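
A DES encrypt/decrypt round trip through the standard Java Cipher API looks like this; the mode, padding, and message are illustrative choices, and DES itself is far too weak for modern use:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// DES round-trip sketch: a random 56-bit key, 64-bit blocks, and PKCS5
// padding to fill out the final block.
public class DesRoundTrip {

    static boolean roundTrip(String message) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();
        Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");

        byte[] plain = message.getBytes(StandardCharsets.UTF_8);
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal(plain); // always a multiple of 8 bytes

        cipher.init(Cipher.DECRYPT_MODE, key);
        return Arrays.equals(plain, cipher.doFinal(ciphertext));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("lab access code 7"));
    }
}
```

Because DES is symmetric, the same key object is used for both init calls; anyone holding that key can both read and forge messages, which is the limitation discussed below.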

Some people refer to E(K1,D(K2,E(K1,x))) as triple-DES. This mode is intended for the encryption of DES keys and IVs for ``Automated Key Distribution''. Its formal name is ``Single Key Encryption and Decryption by a Key Pair''. Others use the term ``triple-DES'' for E(K1,D(K2,E(K3,x))) or E(K1,E(K2,E(K3,x))).
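
The composition E(K1,D(K2,E(K1,x))) can be built from three single-DES operations on one 8-byte block, with D(K1,E(K2,D(K1,y))) as its inverse. A sketch with throwaway key bytes (they are test values, not real keys):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

// Two-key triple-DES (EDE) sketch built from three single-DES operations on
// one 8-byte block, plus the inverse composition for decryption.
public class TripleDesEde {

    static byte[] des(int mode, byte[] key8, byte[] block8) throws Exception {
        Cipher c = Cipher.getInstance("DES/ECB/NoPadding");
        c.init(mode, new SecretKeySpec(key8, "DES"));
        return c.doFinal(block8);
    }

    // E(K1, D(K2, E(K1, x)))
    static byte[] ede(byte[] k1, byte[] k2, byte[] x) throws Exception {
        return des(Cipher.ENCRYPT_MODE, k1,
               des(Cipher.DECRYPT_MODE, k2,
               des(Cipher.ENCRYPT_MODE, k1, x)));
    }

    // D(K1, E(K2, D(K1, y))) — the inverse of ede()
    static byte[] ded(byte[] k1, byte[] k2, byte[] y) throws Exception {
        return des(Cipher.DECRYPT_MODE, k1,
               des(Cipher.ENCRYPT_MODE, k2,
               des(Cipher.DECRYPT_MODE, k1, y)));
    }

    public static void main(String[] args) throws Exception {
        byte[] k1 = {1, 2, 3, 4, 5, 6, 7, 8};
        byte[] k2 = {8, 7, 6, 5, 4, 3, 2, 1};
        byte[] block = "8bytes!!".getBytes(); // exactly one 64-bit block
        System.out.println(Arrays.equals(block, ded(k1, k2, ede(k1, k2, block))));
    }
}
```

Note the middle step runs DES in decrypt mode: with K1 = K2 the whole composition collapses to single DES, which is what makes EDE backwards compatible with single-DES peers.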

Key encryption keys can be a single DEA key or a DEA key pair. Key pairs should be used when additional security is needed (e.g., when the data protected by the key(s) has a long security life). A key pair must not be encrypted or decrypted using a single key.

Protecting privacy using the symmetric DES algorithm (the government-sponsored data encryption standard) is relatively easy on small networks, requiring the exchange of secret encryption keys between each party. As a network proliferates, the secure exchange of secret keys becomes increasingly expensive and difficult to manage. Consequently, this solution by itself is not practical even for moderately large networks.

DES has an additional drawback: it requires sharing a secret key. Each party must trust the other to protect the shared secret key and not reveal it to anyone. Since users must have a different key for each person with whom they communicate, they must entrust each of those people with one of their secret keys. This means that, in practical implementations, secure communication can only take place between people with some kind of prior relationship, be it personal or professional.

The fundamental problems that DES does not address are authentication and non-repudiation. Shared secret keys prevent either party from proving what the other might have done. Either one can surreptitiously modify the data and be sure that a third party will not be able to identify the culprit. The same key that allows you to communicate securely could be used to create forgeries in the other user's name.

2.3 Cryptographic schemes based on RSA

The RSA algorithm was invented by Ronald L. Rivest, Adi Shamir, and Leonard Adleman in 1977. There are a variety of different cryptographic protocols and schemes based on the RSA algorithm in products around the world. The RSAES-OAEP encryption scheme and the RSASSA-PSS signature scheme with appendix are recommended for new applications.

2.3.1 RSAES-OAEP (RSA Encryption Scheme: Optimal Asymmetric Encryption Padding)

It is a public key encryption scheme that combines the RSA algorithm with the OAEP method. The inventors of OAEP are Mihir Bellare and Phillip Rogaway, with improvements by Don B. Johnson and Stephen M. Matyas.

2.3.2 RSASSA-PSS (RSA Signature Scheme with Appendix: Probabilistic Signature Scheme)

It is an asymmetric signature scheme with appendix that combines the RSA algorithm with the PSS encoding method. The inventors of the PSS encoding method are Mihir Bellare and Phillip Rogaway. During the efforts to adopt RSASSA-PSS in the IEEE P1363a standards effort, Bellare and Rogaway, as well as Burt Kaliski (the editor of IEEE P1363a), made certain adaptations to the original version of RSA-PSS to facilitate implementation and integration into existing protocols.

Here is a small example of RSA. Plaintexts are positive integers up to 2^{512}. The keys are quadruples (p,q,e,d), where p is a 256-bit prime, q is a 258-bit prime, and d and e are large numbers with (de - 1) divisible by (p-1)(q-1). We define E_K(P) = P^e mod pq and D_K(C) = C^d mod pq. All quantities are easily computed with classical and modern number-theoretic algorithms (Euclid's algorithm for computing the greatest common divisor yields an algorithm for the former, and probabilistic primality tests for finding large "probable" primes, such as Fermat's test, provide the latter).
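
This toy scheme can be written out directly with BigInteger, following the definitions above. The 256/258-bit prime sizes match the text and are far too small for real security; e = 65537 is a conventional choice, not one the text prescribes.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Toy RSA sketch: choose primes p and q, pick e and d with (de - 1)
// divisible by (p-1)(q-1), encrypt with P^e mod pq, decrypt with C^d mod pq.
public class ToyRsa {

    static boolean roundTrip(BigInteger plaintext) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(256, rnd);
        BigInteger q = BigInteger.probablePrime(258, rnd);
        BigInteger n = p.multiply(q);                            // the modulus pq
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE)); // (p-1)(q-1)
        BigInteger e = BigInteger.valueOf(65537);
        BigInteger d = e.modInverse(phi);                        // de = 1 mod (p-1)(q-1)

        BigInteger c = plaintext.modPow(e, n);                   // E_K(P) = P^e mod pq
        return c.modPow(d, n).equals(plaintext);                 // D_K(C) = C^d mod pq
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(new BigInteger("42424242")));
    }
}
```

The pair (pq, e) is the publishable public key; d, p, and q stay private, exactly as the next paragraph describes.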

Now E_K can easily be computed from the pair (pq,e), but, as far as is known, there is no easy way to compute D_K from the pair (pq,e). So whoever generates K can publish (pq,e). Anyone can then send the key's owner a secret message, and the owner is the only one who can read the messages.

The main advantage of RSA public-key cryptography is increased security and convenience. Private keys never need to be transmitted or revealed to anyone. In a secret key system, by contrast, the secret keys must be transmitted (either manually or through a communication channel), and there may be a chance that an enemy could discover the secret keys during their transmission.

2.4 Java™ Cryptography Architecture

The Java Security API is a new core Java API, built around the java.security package (and its subpackages). This API is designed to allow developers to incorporate both high-level and low-level security functionality into their Java applications. The first release of Java Security in JDK 1.1 contains a subset of this functionality, including APIs for digital signatures and message digests. In addition, there are abstract interfaces for key management, certificate management, and access control. Specific APIs to support X.509 v3 certificates and other certificate formats, and richer functionality in the area of access control, will follow in later versions of the JDK.
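
The message-digest part of this API is the simplest to show: ask the provider framework for an algorithm by name and hash some bytes. SHA-256 is used here because it has a well-known test vector; JDK 1.1 itself shipped MD5 and SHA-1.

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

// Message-digest sketch using the provider framework: the algorithm is
// requested by name, and the returned engine object does the hashing.
public class DigestDemo {

    static String sha256Hex(String msg) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(msg.getBytes(StandardCharsets.UTF_8)))
            hex.append(String.format("%02x", b)); // render each byte as two hex digits
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // SHA-256("abc") is the standard test vector
        // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad.
        System.out.println(sha256Hex("abc"));
    }
}
```

The getInstance-by-name pattern is exactly the algorithm independence the JCA principles below describe: the calling code never names a provider or an implementation class.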

The Java Cryptography Extension (JCE) extends the JCA API to include encryption and key exchange. Together, the JCA and JCE provide a complete, platform-independent cryptography API. The JCE is provided as a separate release because, currently, it cannot be exported outside of the United States.

Java's cryptographic architecture (JCA) was designed around these principles:

  • Implementation independence and interoperability
  • Algorithm independence and extensibility

Implementation independence and algorithm independence are complementary: their goal is to let API users employ cryptographic concepts, such as digital signatures and message digests, without worrying about the implementations, or even the algorithms, being used to realize these concepts. When complete algorithm independence is not possible, the JCA provides developers with standardized algorithm-specific APIs. When implementation independence is not desirable, the JCA lets developers indicate the specific implementations they require.

2.5 Java™ Cryptography Extension (JCE) 1.2.1

The Java™ Cryptography Extension (JCE) 1.2.1 is a package that provides a framework and implementations for encryption, key generation and key agreement, and message authentication code (MAC) algorithms. Cipher support includes symmetric, asymmetric, block, and stream ciphers. The software also supports secure streams and sealed objects.
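
The MAC support mentioned above can be sketched with the Mac engine class: the same key and message always yield the same tag, and any change to the message changes the tag. HmacSHA256 and the key bytes here are illustrative choices, not algorithms prescribed by JCE 1.2.1.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// MAC sketch: a keyed tag over a message. A receiver holding the same key
// recomputes the tag and compares, detecting any tampering in transit.
public class MacDemo {

    static byte[] tag(byte[] key, String msg) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "not-a-real-key".getBytes(StandardCharsets.UTF_8);
        // Same key and message: tags match.
        System.out.println(Arrays.equals(tag(key, "transfer 10"), tag(key, "transfer 10")));
        // Message altered: tags differ.
        System.out.println(Arrays.equals(tag(key, "transfer 10"), tag(key, "transfer 99")));
    }
}
```

This gives integrity and (between the two key holders) authentication, which is why MACs appear alongside ciphers in the JCE framework.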

JCE 1.2.1 is designed so that other qualified cryptography libraries can be plugged in as service providers, and new algorithms can be added seamlessly. (Qualified providers include those approved for export and those certified for domestic use only. Qualified providers are signed by a trusted entity.) JCE 1.2.1 supplements the Java™ 2 platform, which already includes interfaces and implementations of digital signatures and message digests.

This version of JCE is a non-commercial reference implementation that demonstrates a working example of the JCE 1.2.1 APIs. A reference implementation is similar to a proof-of-concept implementation of an API specification. It is used to demonstrate that the specification is implementable and that various compatibility tests can be written against it.

A non-commercial implementation generally lacks the overall integrity of a commercial-grade product. While the implementation is compliant with the API specifications, it will be missing things like a fully featured toolset, sophisticated debugging tools, commercial-grade documentation, and regular maintenance updates.

The Java 2 platform already has implementations and interfaces for digital signatures and message digests. JCE 1.2 was created to extend the Java Cryptography Architecture (JCA) APIs available on the Java 2 platform to include APIs and implementations for cryptographic services that were subject to US export control regulations. JCE 1.2 was released separately as an extension of the Java 2 platform, in accordance with US export control regulations.

Important features of JCE 1.2.1

  • Pure Java implementation.
  • Pluggable framework architecture that allows only qualified providers to connect.
  • Exportable (only in binary format).
  • Single distribution of the Sun Microsystems JCE 1.2.1 software to both domestic and global users, with jurisdiction policy files specifying no restrictions on cryptographic strengths.

2.6 Conclusions

DES is available in software as a content-encryption algorithm. Several people have made DES code available via FTP: Stig Ostholm [FTPSO]; BSD [FTPBK]; Eric Young [FTPEY]; Dennis Furguson [FTPDF]; Mark Riordan [FTPMR]; Phil Karn [FTPPK]. Patterson [PAT87] also provides a Pascal listing of DES. Antti Louko <[email protected]> has written a version of DES with BigNum packages in [FTPAL]. Thus, we can obtain the DES algorithm and use it to encrypt our admin password, user passwords, and server database.

The RSA algorithm is a very popular encryption algorithm. There are collections of links to RSA-related documents on the Internet, and a variety of different cryptographic schemes and protocols based on the RSA algorithm in products around the world. The most recommended are the RSAES-OAEP encryption scheme and the RSASSA-PSS signature scheme. We will analyze their encryption methods and find a suitable option for our server database encryption.

"Java Cryptographic Architecture" (JCA) refers to the framework for accessing and developing cryptographic functionality for the Java Platform. It covers the cryptography-related parts of the JDK 1.1 Java Security API (currently almost the entire API), as well as a set of conventions and specifications provided in this document. It introduces a "provider" architecture that allows for multiple and interoperable cryptographic implementations.

The Java™ Cryptography Extension (JCE) 1.2.1 is used to provide better encryption for the system. This package requires the Java™ 2 SDK v1.2.1 or later, or the Java™ 2 Runtime Environment v1.2.1 or later, to be installed. It is an extension to the Java Cryptography Architecture (JCA) APIs available on the Java 2 platform. Our project topic, lab computer access control in a client/server environment, also requires encryption for the database administrator and the smart card. Therefore, we study JCE 1.2.1 and make use of its cryptographic strengths.

Chapter 3: Smart Card Technology

3.1. About Java card technology

The Java Card specifications enable Java™ technology to run on smart cards and other devices with limited memory. The Java Card API also enables applications written for one Java Card technology-enabled smart card platform to run on any other such platform. With these two technologies, Java smart card technology becomes more efficient to use.

The Java Card Application Environment (JCAE) is licensed on an OEM basis to smart card manufacturers, representing more than 90 percent of global smart card manufacturing capacity.

There are several unique benefits of Java Card technology, such as:

  • Platform independent: a Java Card technology applet that conforms to the Java Card API specification will run on any card developed with the JCAE, allowing developers to use the same Java Card technology applet on cards from different vendors.
  • Multi-application capable: multiple applications can run on a single card. The Java programming language's inherent design around small, downloadable code elements makes it easy to run multiple applications securely on a single card.
  • Post-issuance installation of applications: installing applications after the card has been issued gives card issuers the ability to respond dynamically to their customers' changing needs. For example, if a customer decides to change the frequent flyer program associated with the card, the card issuer can make this change without having to issue a new card.
  • Flexible: the object-oriented methodology of Java Card technology provides flexibility in smart card programming.
  • Compatible with existing smart card standards: the Java Card API is compliant with formal international standards, such as ISO 7816, and industry-specific standards, such as Europay/MasterCard/Visa (EMV).
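
Underlying the ISO 7816 compatibility above is the command APDU format: a four-byte header (CLA, INS, P1, P2), optionally followed by a length byte Lc and a data field. Below is a sketch parser with an illustrative SELECT command; the AID bytes are made up, not from any real card.

```java
// ISO 7816-4 command APDU sketch: CLA (class), INS (instruction), P1/P2
// (parameters), then an optional Lc length byte and data field.
public class ApduParse {

    static String describe(byte[] apdu) {
        return String.format("CLA=%02X INS=%02X P1=%02X P2=%02X Lc=%d",
                apdu[0] & 0xFF, apdu[1] & 0xFF, apdu[2] & 0xFF, apdu[3] & 0xFF,
                apdu.length > 4 ? apdu[4] & 0xFF : 0);
    }

    public static void main(String[] args) {
        // SELECT by name (INS=A4, P1=04) with an illustrative 5-byte
        // application identifier (AID).
        byte[] select = {0x00, (byte) 0xA4, 0x04, 0x00,
                         0x05, 0x11, 0x22, 0x33, 0x44, 0x55};
        System.out.println(describe(select));
    }
}
```

On a real card, a Java Card applet receives such APDUs through the javacard.framework API and dispatches on the INS byte; this plain-Java sketch only shows the byte layout.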

3.2. What are smart cards and their benefits?

A smart card is a card similar in size to today's plastic cards, with an embedded chip. Adding a chip to the card turns it into a smart card with the power to serve many different uses. As an access control device, smart cards make personal and business data available only to the appropriate users. Another application gives users the ability to make a purchase or exchange value. Smart cards provide data portability, security, and convenience. There are two types of smart cards. "Smart" smart cards contain a central processing unit (a CPU), which has the ability to store and secure information and "make decisions" as required by the card issuer's specific application needs. These cards offer a "read/write" capability: new information can be added and processed.

The second type is called a memory card. Memory cards are primarily information-storage cards containing stored value that the user can "spend" in telephone, retail, or related transactions. The intelligence of the integrated circuit chip in both types of cards enables them to protect the stored information from damage or theft. For this reason, smart cards are much more secure than magnetic-stripe cards, which carry their information on the outside of the card and can easily be copied. There are also contactless smart cards, which do not need to be inserted into a contact reader but are recognized by a nearby contactless smart card terminal.

The technology is already in place to allow consumers to combine services such as their credit cards, long distance services and ATM cards on one card. Smart cards can also perform other functions, such as providing security for Internet users and allowing travelers to check in at hotels. Smart cards provide data portability, security, and convenience. Smart cards help businesses evolve and expand their products and services in a changing global marketplace. Banks, telecommunications companies, software and hardware companies, and airlines have the opportunity to tailor their card products and services to better differentiate their offerings and brands. ...

