Results 1 to 5 of 5

Thread: state-of-the-art facial recognition technology


BLiNC Magazine, always served unfiltered
  1. #1

    state-of-the-art facial recognition technology

    Will Face Recognition Ever Capture Criminals?

    Despite thousands of cameras on the scene, the Boston Marathon bombers weren’t caught by face recognition technology

    By Steven Cherry | Posted 24 May 2013 | 14:30 GMT









    Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
    The technologies of face recognition have come a long way, but they were no help in finding the Boston Marathon bombers. In fact, by various accounts, the authorities didn’t even try, even though there were millions of images captured in Boston that day by closed-circuit TV systems at stores, banks, street intersections, and by spectators’ smartphones, cameras, and video cams.
    What’s wrong with face recognition, and when will it finally help us identify and apprehend criminal suspects? On the other hand, when it gets that good, will it turn on its masters and be used to diminish the privacy and security of lawful citizens?
    My guest today is James Wayman. He’s the former director of the National Biometric Test Center at San Jose State University and is now an administrator in its Office of Graduate Studies and Research. He holds four patents in speech processing and has helped develop national and international standards in biometrics. He joins us by phone.
    Jim, welcome to the podcast.
    James Wayman: Well, thank you very much. I appreciate being here.
    Steven Cherry: News outlets reported that the images captured in Boston were of too poor a quality to be compared to a photo database. But as I understand it, that didn’t even matter. The FBI isn’t even set up to match individual photos against a database of them. Why is that?
    James Wayman: Well, so, you’re exactly right. Why do images need to be high quality? Well, the state of the art, where we are in biometric facial-recognition matching, is that we do a very good job if we have full frontal facial images that are evenly lit, with a high amount of resolution, meaning a whole lot of pixels—hopefully at least 90 pixels between the eyes—and we have a completely uncluttered background. In fact, the standard refers to an 18 percent grayscale nonreflective background. So that’s the technology we’re fundamentally dealing with.
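    [A rough way to check Wayman’s “90 pixels between the eyes” rule of thumb, assuming eye-center coordinates already come from a detector; the coordinates below are made up purely for illustration.]

        import math

        def meets_resolution(left_eye, right_eye, min_pixels=90):
            # Rule of thumb from the interview: roughly 90+ pixels between
            # the eye centers before automated matching works well.
            return math.dist(left_eye, right_eye) >= min_pixels

        print(meets_resolution((210, 305), (318, 303)))   # True: ~108 px apart
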
    Secondly, as you point out, the FBI’s not even set up now to try to compare faces with that level of quality and resolution. Now, the FBI has announced that starting next year, they’re going to have a pilot project that will allow them to compare mug shots. Mug shots aren’t quite to the resolution level that I just mentioned. In other words, if you look at a mug shot coming out of a police department, it will not have an 18 percent nonreflective grayscale background. You’ll see all kinds of stuff in the background.
    So the FBI’s saying, well, maybe starting next year we can have a pilot project that will allow us to compare mug shots, even though the quality of most mug shots is not real good.
    Steven Cherry: Just this business of image quality, I guess that’s what prevented comparing the suspects’ photographs to, say, the Massachusetts driver’s license database, where I guess they both had driver’s licenses.
    So let’s talk about how this works. There are a lot of strategies for comparing two facial images. The National Institute of Standards and Technology [NIST] has held some competitions and has a grand challenge for facial recognition. What seems to be working the best right now?
    James Wayman: Yeah, I don’t want people to think somehow we’re finding the distance between the nose and the eyes and the nose and the mouth, because we can’t even find the mouth. There’s something down there. But we can find the eyes. The eyes, we got really lucky. God gave us eyes that have a dark-colored pupil against a white-colored sclera, and those are pretty distinctive. If your eyes are open, we can find those, and we can find the eye centers pretty well. But noses, not so much, and mouths certainly not. Mouths move too much, or mine does. And your chin kind of seems to fade into your neck. Remember, these images are all black and white.
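    [Finding eye centers the way Wayman describes is routine with off-the-shelf tools today. A minimal sketch using OpenCV’s stock Haar eye cascade; “face.jpg” is only a placeholder path, and the detector assumes a reasonably frontal, well-lit photo.]

        import cv2

        img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        # Each detection is a bounding box; its center approximates an eye center.
        eyes = eye_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
        centers = [(x + w // 2, y + h // 2) for (x, y, w, h) in eyes]
        print(centers)
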
    Okay, so, let’s start historically with the technology. It was developed in the early 1960s by a fellow named Woodrow W. Bledsoe, who I believe was an IEEE member. He later retired at the University of Texas at Austin. And what he was doing was marking facial images by hand—the centers of the eyes, the corners of the eyes, the corners of the lips, and the like. And then he projected these marks onto a sphere and he rotated the sphere, trying to get marks from two different images to line up, at which point he could say, aha, these are from the same person.
    Well, all of this hand marking didn’t work so well, and in the late 1980s, Sirovich and Kirby came out with a very simple idea, so simple it sounds like it couldn’t possibly work, but it did. And that is, we’re going to project the entire face image onto a series of filters. The filters themselves will be derived from a PCA [principal component analysis] decomposition of a set of vectors created from another group of facial images.
    Well, that approach didn’t work all that great. And one of the reasons is these filters are global filters, meaning all over each one of these basis functions, you have nonzero values. What that means is if someone changes their mouth, for instance, it impacts every single one of the projection coefficients—every one of them. Oh, that’s terrible.
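    [The Sirovich and Kirby idea survives as the textbook “eigenfaces” method, and the projection-onto-global-filters step Wayman describes can be sketched in a few lines with scikit-learn. The dataset and component count below are arbitrary illustrative choices, not anything from the interview.]

        import numpy as np
        from sklearn.datasets import fetch_lfw_people
        from sklearn.decomposition import PCA

        faces = fetch_lfw_people(min_faces_per_person=20, resize=0.4)  # downloads on first use
        X = faces.data                                   # one flattened face image per row

        pca = PCA(n_components=100, whiten=True).fit(X)  # the "global filters"
        coeffs = pca.transform(X)                        # projection coefficients

        def distance(i, j):
            # A small distance between coefficient vectors suggests the same person,
            # which is why a changed mouth shifting every coefficient is so damaging.
            return float(np.linalg.norm(coeffs[i] - coeffs[j]))

        print(distance(0, 1))
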
    So in, I think, about 1996, you had a Rockefeller University professor who said, we’re going to fix that. What we’re going to use for our basis vectors, onto which we project these faces, are what we call “local basis vectors,” meaning most of each basis vector is zero. So if you smile between the two pictures, one picture’s smiling and one picture’s not smiling, maybe only one or two of the coefficients in the representation are going to change. He called this “local feature analysis” because each one of these basis vectors only had a localized nonzero region. And that worked really, really well. And, in fact, that took us into the 2000s.
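    [The property he is describing, that a localized basis confines a localized change to a few coefficients, can be demonstrated with a toy one-dimensional example. This illustrates the principle only; it is not an implementation of local feature analysis.]

        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        face = rng.standard_normal(n)
        smile = face.copy()
        smile[28:36] += 1.0          # a change confined to a small region ("the mouth")

        # Global basis: every basis vector is nonzero everywhere, as in PCA.
        global_basis = np.linalg.qr(rng.standard_normal((n, n)))[0]

        # Local basis: each basis vector is nonzero only on one small window.
        local_basis = np.zeros((n, n))
        for i in range(0, n, 8):
            local_basis[i:i+8, i:i+8] = np.linalg.qr(rng.standard_normal((8, 8)))[0]

        def changed(basis, a, b, tol=1e-9):
            # Count projection coefficients that differ between the two images.
            return int(np.sum(np.abs(basis.T @ a - basis.T @ b) > tol))

        print(changed(global_basis, face, smile))   # essentially all 64 coefficients move
        print(changed(local_basis, face, smile))    # only the coefficients near the "mouth" move
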
    And then in 2000, under funding from the Office of Naval Research, a whole new approach was developed. And that was, what we’re going to do, is we’re going to take simply very, very small filters, technically speaking, Gabor filters, and we’re going to draw a grid on the face, and every place where the grid, this checkerboard, crosses in the face, we’re going to put down a series of Gabor filters, small Gabor filters on that area of the face, and we’re going to find out what coefficients we get out.
    Then the next advance, which came maybe just five or six years ago, was to try to tie the grid to actual landmarks on the face. Now, we can’t find the nose exactly and the mouth exactly, but we’ve done a very, very good job in the last 10 years of finding eye centers pretty exactly. And for most people, the nose is midway between the two eye centers and down. I say for most people, because there are people whose eye centers are not horizontally aligned, so that’s one failure mode. But for most people, we can guess where the nose might be, and we might look for changes in the black-white pattern between the eyes and down that would indicate, yeah, that’s sort of a nose, and if we go below that, we should get the mouth.
    And then what you can do is, because facial expression changes, the illumination of the face changes, the pose angle of the face changes, you can warp these grids around a little bit to try to get the Gabor filter coefficients of two facial images to match up. And if you can get the coefficients to match up, you say, “Aha, this may be the same face.” If, despite your attempts at warping, you can’t get the facial image coefficients to match, you say, “Well, it’s probably not the same guy.”
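    [A rough sketch of the grid-of-Gabor-filters idea using OpenCV’s built-in Gabor kernels on a fixed 8-by-8 grid. The landmark-driven warping Wayman mentions is omitted, and “face.jpg” is again just a placeholder.]

        import cv2
        import numpy as np

        img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        # A small bank of Gabor filters at four orientations.
        kernels = [cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
                   for theta in np.linspace(0, np.pi, 4, endpoint=False)]
        responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]

        # Sample every filter response at each grid point to build a feature vector.
        h, w = img.shape
        ys = np.linspace(0, h - 1, 8, dtype=int)
        xs = np.linspace(0, w - 1, 8, dtype=int)
        features = np.array([r[y, x] for y in ys for x in xs for r in responses])

        print(features.shape)   # (256,): 8 x 8 grid points x 4 filters, compared between two faces
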
    So there’s one more approach we need to talk about, and that’s the one you might have thought of originally, and that is local correlation. Maybe we can just take small patches of one face, place them over another face, and see if they correlate and match up.
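    [The local-correlation idea amounts to normalized cross-correlation of corresponding patches. A minimal sketch, assuming the two faces are already cropped, roughly aligned grayscale arrays of the same size.]

        import numpy as np

        def ncc(a, b):
            # Normalized cross-correlation between two equally sized patches.
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom else 0.0

        def patch_similarity(face1, face2, patch=16):
            # Average correlation over a grid of corresponding patches;
            # values near 1.0 suggest the same face.
            h, w = face1.shape
            scores = [ncc(face1[y:y + patch, x:x + patch],
                          face2[y:y + patch, x:x + patch])
                      for y in range(0, h - patch + 1, patch)
                      for x in range(0, w - patch + 1, patch)]
            return float(np.mean(scores))
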
    Now, all these methods are available, and I understand now from the facial-recognition companies that, depending on the resolution of the image available, they can actually apply all four methods simultaneously to determine the degree of correspondence, the degree of similarity between two facial images.
    Steven Cherry: Good. So, I guess another problem holding back face recognition in law-enforcement situations is on the database side, right? The quality of those images. And then there’s yet another problem, also on the database side. It’s the too-much-of-a-good-thing problem, right? It’s impractical to compare an image against, say, every photo in Facebook, even though the images there are mostly pretty good images.
    James Wayman: And you’re leaving out a third impediment, and that is legislative. For instance, I don’t know what authority the FBI would even have to access the driver’s license images from the State of California. I guarantee that they do not have authority to access the facial images stored in our social service welfare database.
    Steven Cherry: Well, let’s suppose that weren’t an impediment, and I believe that it is an impediment now, and that there are efforts to remove that impediment. So let’s just talk about the practical matter of comparing a single image against a database of millions of photographs, say.
    James Wayman: Okay. Well, I mean, it’s the obvious probabilistic problem, and that is, even minuscule false-positive rates, multiplied over a very large database, produce a fair number of false positives, right? So now, suppose the person you’re looking for actually is in the database. You get back that person’s face mixed with all the false positives. Suppose that person you’re looking for is not in the facial database. You still get about the same number of false positives. So, you spend most of your time looking at the false positives.
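    [The arithmetic behind that point is simple expected-value math. The rate and gallery size below are made-up but plausible numbers, not figures from the interview.]

        false_match_rate = 1e-5      # one false match per 100,000 comparisons
        gallery_size = 10_000_000    # a ten-million-image database

        expected_false_positives = false_match_rate * gallery_size
        print(expected_false_positives)   # 100.0, whether or not the suspect
                                          # is actually in the database
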
    Steven Cherry: Right, which is something that sometimes happens for the FBI, right? They have to track down a thousand leads and one of them proves to be correct.
    James Wayman: I suppose, but that’s not how the FBI does it. I mean, that’s really an impractical way to approach things. There’s a saying in this community that “one word is worth a thousand pictures.” You don’t have to look through a thousand pictures; that’s ridiculous. You want to just find the word. The word maybe is the guy’s driver’s license number or the guy’s address or the guy’s passport number or maybe even his name or something like that. And then get that, find that first. That may be a whole lot easier. And that way you don’t have to cull through all those pictures.
    Steven Cherry: Now, what about the computational problem? How much time does it take to compare two photographs?
    James Wayman: There’s an easy answer to that, and I’m sure it’s in the NIST test reports. I just don’t remember it. I mean, these numbers are commonly published, and they just go in one eye and out the other in me. I just can’t tell you. You know, it’s on the order of milliseconds, I’m sure. And, you know, you can parallelize that, right? And so you can have multiple computers. That’s not the issue. The issue is not the computational time. That can be handled through parallel computing.
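    [His point that a one-to-many search parallelizes easily can be sketched with the Python standard library. The feature vectors and distance function here are stand-ins, not any vendor’s matcher.]

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def match_chunk(args):
            # Compare the probe against one slice of the gallery.
            probe, start, chunk = args
            return [(start + i, float(np.linalg.norm(probe - c)))
                    for i, c in enumerate(chunk)]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            gallery = rng.standard_normal((100_000, 128))   # 100,000 stand-in feature vectors
            probe = rng.standard_normal(128)

            workers = 8
            chunks = np.array_split(gallery, workers)
            starts = np.cumsum([0] + [len(c) for c in chunks])[:-1]

            with ProcessPoolExecutor(workers) as pool:
                parts = list(pool.map(match_chunk,
                                      [(probe, s, c) for s, c in zip(starts, chunks)]))
            results = sorted((r for part in parts for r in part), key=lambda t: t[1])
            print(results[:5])   # the closest gallery entries to the probe
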
    Steven Cherry: We’ve seen a lot of areas where technology seems to be making very little progress for years, and then suddenly it takes off, right? Self-driving cars went—you should pardon the pun—from 0 to 60 in just the last few years, language translation, voice recognition. Do you think that’s likely to happen with face recognition?
    James Wayman: Well, I don’t know that I accept the fundamental premise. Voice-recognition work, this is the work I was doing in the ’80s, both speech- and speaker-recognition work has progressed pretty uniformly for the 30 years I’ve been involved, meaning that it did get to a level where people could actually start using it, maybe a couple of years ago when Apple came out with Siri. It may just rise to the level where people can start using it. That doesn’t mean that the progress has in any way been uneven.
    Now, I would say with regard to facial-recognition technology, the government dumped a ton of money into this technology after 9/11. And I worked for the government, helping them spend some of that money. I didn’t receive the money myself; I helped them allocate money to universities to do the research. And so when the money went into the technology, the technology improved greatly.
    Right now, of course, we’ve cut back on our R&D money. The technology will not be improving as rapidly in the coming years, but it takes a while. There’s a phase lag there. It’ll take a while for us to figure that out, that the technology improved very, very rapidly in the 2000s and did not improve as rapidly in our decade because the amount of money being spent was minuscule compared to the previous decade.
    Steven Cherry: Eventually, at some point, maybe a few years and maybe longer, but at some point this stuff is going to be really fantastic. And at that point, are we going to start to worry about incursions of our privacy, being too readily identified, and are we going to start regretting all those millions of photos we’ve put on Facebook, for example?
    James Wayman: I think it’s interesting you should bring that question up in the context of biometrics. I mean, don’t we already have that problem? People carry around these personal transmitter devices called cellphones, right? And those numbers are pretty identifying. Nobody but me carries my cellphone, and the cellphone transmits, however many seconds, its phone number to whatever tower is hanging around. And the potential for invading my privacy is much, much stronger with things like my cellphone or my Facebook account or my e-mail account than it is for using facial-recognition and surveillance applications. For me, that’s a nonstarter with regard to privacy. That’s not the issue; the issue is things like cellphones.
    Steven Cherry: Fair enough. Well, Jim, it’s a potentially fabulously useful technology, and I guess maybe that is fearfully so, as I might have thought, given what you have to say about cellphones. So, thanks for joining us today and telling us about it.
    James Wayman: Well, thank you very much. I enjoyed talking to you.
    Steven Cherry: We’ve been speaking with biometric researcher Jim Wayman about the current limitations—and future prospects—of face recognition.
    For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
    This interview was recorded Thursday, 16 May 2013.

  2. #2

    Re: state-of-the-art facial recognition technology

    Lady Liberty’s Watching You
    I wanted to write about face-recognition software considered for use at the statue. Here’s what happened.

    By Ryan Gallagher | Posted Monday, April 29, 2013, at 5:45 AM


    The Statue of Liberty is upgrading its surveillance technology. Just don’t mention face recognition.

    Photo by Lucas Jackson/Reuters

    The Statue of Liberty is getting a facelift, though the changes aren’t only cosmetic. An upgraded "state of the art" security system will help keep Lady Liberty safe when it reopens soon. But what does the system entail, and could it involve a controversial new face-recognition technology that can detect visitors’ ethnicity from a distance? I tried to find out—and a New York surveillance company tried to stop me.

    Face recognition was first implemented at the Statue of Liberty in 2002 as part of an attempt to spot suspected terrorists whose mug shots were stored on a federal database. At the time, the initiative was lambasted by the American Civil Liberties Union, which said it was so ineffective that “Osama Bin Laden himself” could easily dodge it.

    But the technology has advanced since then: Late last year, trade magazine Police Product Insight reported that a trial of the latest face-recognition software was being planned at the Statue of Liberty for the end of 2012 to “help law enforcement and intelligence agencies spot suspicious activity.” New York surveillance camera contractor Total Recall Corp. was quoted as telling the magazine that software called FaceVACS, made by German firm Cognitec, was set for trial at the famed tourist attraction. FaceVACS, Cognitec boasts in marketing materials, can guess ethnicity based on a person’s skin color, flag suspects on watch lists, estimate the age of a person, detect gender, “track” faces in real time, and help identify suspects if they have tried to evade detection by putting on glasses, growing a beard, or changing their hairstyle. Some versions of face-recognition software used today remain ineffective, as investigators found in the aftermath of the Boston bombings. But Cognitec claims its latest technology has a far higher accuracy rating—and is certainly more advanced than the earlier versions of face-recognition software, like the kind used at the Statue of Liberty back in 2002. (It is not clear whether the face-recognition technology remained in use at the statue after 2002.)

    Liberty Island took such a severe battering during Sandy that it has stayed closed to the public ever since—thwarting the prospect of a pilot of the new software. But the statue, which attracts more than 3 million visitors annually according to estimates, is finally due to open again on July 4. In March, Statue of Liberty superintendent Dave Luchsinger told me that plans were underway to install an upgraded surveillance system in time for the reopening. “We are moving forward with the proposal that Total Recall has come up with,” he said, adding that “[new] systems are going in, and I know they are state of the art.” When it came to my questions about face recognition, though, things started to get murky. Was that particular project back on track? “We do work with Cognitec, but right now because of what happened with Sandy it put a lot of different pilots that we are doing on hold,” Peter Millius, Total Recall’s director of business development, said in a phone call. “It’s still months away, and the facial recognition right now is not going to be part of this phase.” Then, he put me on hold and came back a few minutes later with a different position—insisting that the face-recognition project had in fact been “vetoed” by the Park Police and adding that I was “not authorized” to write about it.

    That was weird, but it soon got weirder. About an hour after I spoke with Total Recall, an email from Cognitec landed in my inbox. It was from the company’s marketing manager, Elke Oberg, who had just one day earlier told me in a phone interview that “yes, they are going to try out our technology there” in response to questions about a face-recognition pilot at the statue. Now, Oberg had sent a letter ordering me to “refrain from publishing any information about the use of face recognition at the Statue of Liberty.” It said that I had “false information,” that the project had been “cancelled,” and that if I wrote about it, there would be “legal action.” Total Recall then separately sent me an almost identical letter—warning me not to write “any information about Total Recall and the Statue of Liberty or the use of face recognition at the Statue of Liberty.” Both companies declined further requests for comment, and Millius at Total Recall even threatened to take legal action against me personally if I continued to “harass” him with additional questions.

    Linda Friar, a National Park Service spokeswoman, confirmed that the procurement process for security screening equipment is ongoing, but she refused to comment on whether the camera surveillance system inside the statue was being upgraded on the grounds that it was “sensitive information.” So will there be a trial of new face-recognition software—or did the Park Police “cancel” or “veto” this? It would probably be easier to squeeze blood from a stone than to obtain answers to those questions. “I’m not going to show my hand as far as what security technologies we have,” Greg Norman, Park Police captain at Liberty Island, said in a brief phone interview.

    The great irony here, of course, is that this is a story about a statue that stands to represent freedom and democracy in the modern world. Yet at the heart of it are corporations issuing crude threats in an attempt to stifle legitimate journalism—and by extension dictate what citizens can and cannot know about the potential use of contentious surveillance tools used to monitor them as they visit that very statue. Whether Cognitec's ethnicity-detecting face recognition software will eventually be implemented at Lady Liberty remains to be seen. What is certain, however, is that the attempt to silence reporting on the mere prospect of it is part of an alarming wider trend to curtail discussion about new security technologies that are (re)shaping society.

    This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.

  3. #3

    Re: state-of-the-art facial recognition technology

    FBI begins installation of $1 billion face recognition system across America

    Published time: September 07, 2012 20:38

    Birthmarks, be damned: the FBI has officially started rolling out a state-of-the-art face recognition project that will assist in their effort to accumulate and archive information about each and every American at a cost of a billion dollars.

    The Federal Bureau of Investigation has reached a milestone in the development of their Next Generation Identification (NGI) program and is now implementing the intelligence database in unidentified locales across the country, New Scientist reports in an article this week. The FBI first outlined the project back in 2005, explaining to the Justice Department in an August 2006 document (.pdf: http://www.justice.gov/jmd/2008justi...00/fbi_ngi.pdf) that their new system will eventually serve as an upgrade to the current Integrated Automated Fingerprint Identification System (IAFIS) that keeps track of citizens with criminal records across America.

    “The NGI Program is a compilation of initiatives that will either improve or expand existing biometric identification services,” its administrator explained to the Department of Justice at the time, adding that the project, “will accommodate increased information processing and sharing demands in support of anti-terrorism.”

    “The NGI Program Office mission is to reduce terrorist and criminal activities by improving and expanding biometric identification and criminal history information services through research, evaluation and implementation of advanced technology within the IAFIS environment.”

    The agency insists, “As a result of the NGI initiatives, the FBI will be able to provide services to enhance interoperability between stakeholders at all levels of government, including local, state, federal, and international partners.” In doing as such, though, the government is now going ahead with linking a database of images and personally identifiable information of anyone in their records with departments around the world thanks to technology that makes fingerprint tracking seem like kids' stuff.

    According to their 2006 report, the NGI program utilizes “specialized requirements in the Latent Services, Facial Recognition and Multi-modal Biometrics areas” that “will allow the FBI to establish a terrorist fingerprint identification system that is compatible with other systems; increase the accessibility and number of the IAFIS terrorist fingerprint records; and provide latent palm print search capabilities.”

    Is that just all, though? During a 2010 presentation (.pdf) made by the FBI’s Biometric Center of Intelligence, the agency identified why facial recognition technology needs to be embraced. Specifically, the FBI said that the technology could be used for “Identifying subjects in public datasets,” as well as “conducting automated surveillance at lookout locations” and “tracking subject movements,” meaning NGI is more than just a database of mug shots mixed up with fingerprints — the FBI has admitted that its intent with the technology goes beyond just searching for criminals and includes spectacular surveillance capabilities. Together, it’s a system unheard of outside of science fiction.

    New Scientist reports that a 2010 study found technology used by NGI to be accurate in picking out suspects from a pool of 1.6 million mug shots 92 percent of the time. The system was tested on a trial basis in the state of Michigan earlier this year, and has already been cleared for pilot runs in Washington, Florida and North Carolina. Now according to this week’s New Scientist report, the full rollout of the program has begun and the FBI expects its intelligence infrastructure to be in place across the United States by 2014.

    In 2008, the FBI announced that it had awarded Lockheed Martin Transportation and Security Solutions, one of the Defense Department’s most favored contractors, the authorization to design, develop, test and deploy the NGI System. Thomas E. Bush III, the former FBI agent who helped develop the NGI's system requirements, tells NextGov.com, "The idea was to be able to plug and play with these identifiers and biometrics." With those items being collected under little admitted oversight, though, putting the personal details of millions of Americans into the hands of Pentagon staffers only begins to open up the civil liberties issues.

    Jim Harper, director of information policy at the Cato Institute, adds to NextGov that investigators pair facial recognition technology with publicly available social networks in order to build bigger profiles. Facial recognition "is more accurate with a Google or a Facebook, because they will have anywhere from a half-dozen to a dozen pictures of an individual, whereas I imagine the FBI has one or two mug shots," he says. When these files are then fed to law enforcement agencies on local, federal and international levels, intelligence databases that include everything from close-ups of eyeballs and irises to online interests could be shared among offices.

    The FBI expects the NGI system to include as many as 14 million photographs by the time the project is in full swing in only two years, but the pace of technology and the new connections constantly created by law enforcement agencies could allow for a database that dwarfs that estimate. As RT reported earlier this week, the city of Los Angeles now considers photography in public space “suspicious,” and authorizes LAPD officers to file reports if they have reason to believe a suspect is up to no good. Those reports, which may not necessarily involve any arrests, crimes, charges or even interviews with the suspect, can then be filed, analyzed, stored and shared with federal and local agencies connected across the country to massive data fusion centers. Similarly, live video transmissions from thousands of surveillance cameras across the country are believed to be sent to the same fusion centers as part of TrapWire, a global eye-in-the-sky endeavor that RT first exposed earlier this year.

    “Facial recognition creates acute privacy concerns that fingerprints do not,” US Senator Al Franken (D-Minnesota) told the Senate Judiciary Committee’s subcommittee on privacy, technology and the law earlier this year. “Once someone has your faceprint, they can get your name, they can find your social networking account and they can find and track you in the street, in the stores you visit, the government buildings you enter, and the photos your friends post online.”

    In his own testimony, Carnegie Mellon University Professor Alessandro Acquisti said to Sen. Franken, “the convergence of face recognition, online social networks and data mining has made it possible to use publicly available data and inexpensive technologies to produce sensitive inferences merely starting from an anonymous face.”

    “Face recognition, like other information technologies, can be a source of both benefits and costs to society and its individual members,” Prof. Acquisti added. “However, the combination of face recognition, social networks data and data mining can significantly undermine our current notions and expectations of privacy and anonymity.”

    With the latest report suggesting the NGI program is now a reality in America, though, it might be too late to try and keep the FBI from interfering with seemingly every aspect of life in the US, both private and public. As of July 18, 2012, the FBI reports, “The NGI program … is on scope, on schedule, on cost, and 60 percent deployed.”

  4. #4

    Re: state-of-the-art facial recognition technology

    Now they have state-of-the-art facial recognition technology, but there isn't enough of a database behind the facial recognition servers, so where do they go to fill the void? Facebook, of course. Every time Facebook recognizes a face, it wants you to tag it and let everyone know who it is. Thank you very much, and thank you FBI too. Google my face on Facebook, Picasa, and every other photo site; match the face to a name, address, and phone number. They have a long way to go, but with all your help they are getting there. On Facebook or any other site, once you have entered information about yourself (address, phone number, etc.), it is never erased from their servers. They give you the choice to remove or change it, but that only applies on your end: you can see the change, but on their servers all the information is saved forever (hard drives are cheap these days).





    Last edited by airdog07; March 16th, 2015 at 08:16 PM.

  5. #5

    Re: state-of-the-art facial recognition technology

    Biz Break: Facebook rewrites privacy policies to be more clear about intense use of your data

    By Jeremy C. Owens

    jowens@mercurynews.com
    Posted: 08/29/2013 03:15:22 PM PDT

    Photo: KAREN BLEIER/AFP/GETTY IMAGES

    Today: Facebook announces that it is rewriting its privacy policies in wake of "Sponsored Stories" settlement, seeks to begin using profile photos for facial recognition. Also: PC stocks take a hit from new report, but Silicon Valley has a strong day on Wall Street.

    The Lead: Facebook looks to rewrite privacy policy, use profile pics for tagging

    Facebook announced Thursday that it is planning to update its primary policies regarding use of users' data, seeking to clarify its intense usage of information collected from more than 1 billion global members while also extending its controversial photo-identification system.

    The main thrust of the proposed update to the social network's data-use policy is to be much clearer about how Facebook uses user data, which can be summed up as "in every way possible."

    "We want to be really, really clear that whenever you give us information, we're going to take it," Chief Privacy Officer Erin Egan told AllThingsD writer Mike Isaac.

    Much of the blog post detailing the update focuses on the combination of advertising and Facebook members' social-media usage, a sticking point since the company faced a class-action suit from users whose names and photos were used in so-called Sponsored Stories. Menlo Park-based Facebook agreed to pay out $20 million in a settlement of that case, which was wrapped up this week when a judge gave his final OK to the agreement, which could pay as little as 2 cents to affected members.

    "We revised our explanation of how things like your name, profile picture and content may be used in connection with ads or commercial content to make it clear that you are granting Facebook permission for this use when you use our services," Egan wrote in Thursday's blog post.

    Facebook requests feedback about the changes but is no longer bound to abide by it, after changing its rules last year amid a host of privacy changes. Previously, Facebook had to put privacy changes up to a vote of its members if more than 7,000 people commented on a proposed change, but the subsequent vote needed to attract input from 30 percent of Facebook users to be considered binding, a threshold nearly impossible to attain as the Menlo Park company's offering has exploded globally.

    As journalists and privacy advocates begin to tear through the massive changes, the first change to draw attention regards Facebook's already-controversial facial-recognition technology. The offering, which seeks to identify users in photos that are uploaded to the service, created an uproar in Europe, where Facebook eventually shut off the service and deleted its trove of information.

    In areas where the technology is still used, Facebook is seeking to allow users' profile pictures to be used as part of the facial-recognition system, instead of relying solely on previously tagged photographs. Facebook's goal with the change is to make sure that photos are tagged so users find out when other users upload pictures of them to the social network.

    "Our goal is to facilitate tagging so that people know when there are photos of them on our service," Egan told Reuters.

    While Facebook is no longer required to follow users' demands on its privacy policies, it is still under close scrutiny from the federal government, after a 2011 deal with the Federal Trade Commission sparked by allegations of deception and of sharing users' personal information without consent. As a result of that agreement, Facebook is subject to independent audits of its privacy practices for two decades; Egan reported recently that its first audit confirmed "that the controls set out in our privacy program are working as intended," but "also helped us identify areas to work on as Facebook continues to evolve as a company."

    Facebook's attempt to open all possible avenues for advertising is part of a push to monetize its popularity after a record-breaking initial public offering valued the company at more than $100 billion. More than a year after its IPO, Facebook recently passed the $100 billion level in market capitalization, after its quarterly earnings report showed strong gains in mobile advertising revenue and investors pushed the stock higher.

    Facebook shares gained 1.8 percent to $41.28 Thursday, a record closing high that again pushed the company's market cap north of $100 billion.

    "now we are making it clear that moving forward we also want to use your profile photos as an additional input into the technology to better recognize you", no need for tagging any more computer well do it for you......
