PMXBOT Log file Viewer


#mongodb logs for Saturday the 26th of July, 2014

[00:28:16] <a|3xxx> can somebody please explain the madness with mongodb user authentication
[00:29:35] <LouisT> what madness is it?
[00:29:57] <a|3xxx> every version it's changing
[00:30:02] <a|3xxx> it's hard to keep up
[00:31:07] <a|3xxx> so i upgraded to this new version, and now my code doesn't work
[00:31:35] <a|3xxx> i use the copydb command; once it completes, my user doesn't have access to the new database
[01:59:07] <a|3xxx> so i am trying to use db.createUser() but it tells me no such cmd
[02:03:56] <a|3xxx> it seems the migration from 2.4 didn't migrate permissions correctly
[02:05:26] <LouisT> a|3xxx: i had the same issue
[02:05:41] <LouisT> you'll have to log in with terminal and issue a db upgrade command
[02:05:41] <LouisT> sec
[02:06:44] <LouisT> a|3xxx: http://docs.mongodb.org/manual/release-notes/2.6-upgrade-authorization/
[02:07:04] <a|3xxx> that didn't work
[02:07:33] <LouisT> it should, what did you do?
[02:07:53] <a|3xxx> i mean it returned ok but it didn't solve my problem
[02:08:23] <LouisT> remember, you have to put mongodb in no auth, then connect with "mongo", then run: db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1 });
[02:09:04] <a|3xxx> hm didn't do that first step
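Spelled out, the sequence LouisT describes would look something like the following (the dbpath is a placeholder, and this is only a sketch of the linked procedure, not a substitute for the upgrade notes):

    # 1. restart mongod without the --auth flag so no credentials are required
    mongod --dbpath /data/db
    # 2. from another terminal, connect with the shell and run the upgrade
    mongo
    > db.getSiblingDB("admin").runCommand({ authSchemaUpgrade: 1 });
    # 3. restart mongod with --auth re-enabled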
[02:26:13] <s2013> anyone here?
[02:33:24] <LouisT> hello
[02:33:31] <LouisT> a|3xxx: did that fix it?
[02:33:56] <s2013> LouisT, is there a limit to the # of items you can import using mongoimport?
[02:34:07] <LouisT> oh
[02:34:09] <LouisT> no idea
[02:34:15] <LouisT> i'm sure it's limited by db size
[02:34:24] <LouisT> but i'm not sure about the actual number limit
[02:34:49] <s2013> hmm
[02:37:35] <LouisT> from what i can tell it has a 16MB line limit, but i'm not completely sure about your question
[02:37:58] <s2013> that's per document tho
[02:38:00] <s2013> isn't it?
[02:38:03] <s2013> isn't a document = a row?
[02:38:06] <s2013> i'm new to nosql
[02:43:18] <LouisT> not exactly sure, is there a reason you're using mongoexport/mongoimport instead of mongodump/mongorestore?
[02:44:40] <s2013> i have no idea
[02:44:48] <s2013> should i use mongorestore? we exported data from parse.com
[02:45:06] <s2013> it's a bunch of collections.. is there any way to mass restore them?
[02:45:09] <s2013> instead of one at a time
[02:45:23] <s2013> the size of the collection files ranges from a few kb to around 20 gigs
[02:45:47] <LouisT> O_o
[02:46:03] <LouisT> i'm not sure if mongorestore can restore from mongoexport
[02:46:19] <LouisT> dump and restore are reportedly a better option
[02:47:06] <LouisT> http://docs.mongodb.org/manual/reference/program/mongorestore/
[02:47:22] <s2013> https://gist.github.com/ss2k/611f45af83fdcfe4baa9 this is what the format looks like
[02:47:34] <s2013> the { "record"} is basically just a placeholder for the actual object
[02:48:01] <s2013> i removed the results part so it was just a bunch of records, and it imported one collection but says it's too large for others
[02:48:11] <LouisT> yea that's json
[02:48:19] <LouisT> mongorestore does bson
[02:49:01] <LouisT> can you get the export as bson from parse?
[02:49:03] <s2013> so what would you recommend? i can write a script that does each one individually but i'm sure that's the slowest way to do it
[02:49:09] <s2013> doesn't bson have a limit of 16 megs?
[02:49:56] <LouisT> 16MB for each line of data i think it was
[02:50:16] <s2013> well no.. parse has shitty support. in fact it has no support.. we pay $2k/mo to them for no support
[02:50:20] <s2013> that's why we are building our own api
[02:50:32] <s2013> it only has an export option but that's it.. it just sends you a link to a zip file
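Since the parse.com export is JSON rather than BSON, the "mass restore" could be a shell loop over mongoimport; a sketch assuming one export file per collection, named after it (the db name and file layout are guesses):

    for f in *.json; do
        mongoimport --db mydb --collection "$(basename "$f" .json)" --file "$f" --jsonArray
    done

Older mongoimport releases cap --jsonArray input at 16MB, which would explain the "too large" errors on the bigger files; converting each export to one JSON object per line sidesteps the cap.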
[02:56:34] <a|3xxx> LouisT, no, there is some weird stuff going on, i need to figure out this new auth scheme
[05:09:32] <deanclkclk_> folks, question
[05:09:45] <deanclkclk_> I have 2 mongod instances running on separate vms
[05:09:58] <deanclkclk_> one of my mongod instances has data
[05:10:14] <deanclkclk_> how can I do a dump and restore it on the second db?
[08:02:54] <navneet894> Hello, I am Navneet Mittal from IIT JODHPUR, i know c++ and want to contribute to mongodb. I am new to open source and need guidance.
[08:10:24] <Bilge> lol
[08:10:55] <Bilge> What do you want to contribute? Aids?
[08:16:02] <navneet894> i just want to make a start on open source contribution. want to know what i can contribute?
[12:16:41] <mn3monic> hello, about db.collection.insert({'sample_key': 'sample_value'}), how do I specify that 'sample_key' must be unique without querying at every single insertion to check if it's already in the db?
[12:20:41] <mn3monic> the only way I've found is:
[12:20:42] <mn3monic> if not db.collection.find_one({'sample_key': 'sample_value'}): db.collection.insert({'sample_key': 'sample_value', 'second_key': 'second_value', ...})
[12:20:59] <mn3monic> but, in that way, I do 2 queries for each entry
[12:21:11] <mn3monic> is this the best practice ?
[12:22:28] <Derick> mn3monic: you can set a unique key on sample_key
[12:23:08] <brammator> from a collection {name, id, lastupdate}, and a list of names plus a timestamp: could I get three results ("names in collection and up to date; names in collection but lastupdate < timestamp; names not in collection at all") in one request?
[12:23:59] <brammator> Or should I make two requests and build the third list in my script?
[12:29:20] <dawik> mn3monic: i believe if you try to insert something with the same key
[12:29:23] <dawik> you will get an error
[12:29:40] <dawik> you need "upsert" for it to go through
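With a unique index the server enforces this itself, so the pre-insert find_one and its extra round-trip can go; a pymongo sketch with connection details assumed (insert() is the 2.x-era call the chat uses; newer drivers spell it insert_one()):

    from pymongo import MongoClient
    from pymongo.errors import DuplicateKeyError

    db = MongoClient().mydb                                 # placeholder database
    db.collection.create_index("sample_key", unique=True)   # one-time setup

    try:
        db.collection.insert({'sample_key': 'sample_value', 'second_key': 'second_value'})
    except DuplicateKeyError:
        pass  # sample_key already present; the server rejected the duplicate

(As dawik hints, an update with upsert=True is the variant that inserts when no match exists instead of raising an error.)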
[13:24:21] <obiwahn> can i have mongod and mongos on the same server?
[15:11:16] <brammator> I have to cache some API requests. What's better: use compound key{host, path, params} or simple key with (host,key,params) converted to string?
[15:47:07] <DarkLinkXXXX> Can mongoimport work with just any json file, or does it require something mongo-specific?
[15:48:31] <kali> DarkLinkXXXX: by default, it expects exactly one json object per line, and will be happy to load it whatever the content is
[15:48:48] <kali> DarkLinkXXXX: there is an option to import a json array of objects instead
[15:48:50] <DarkLinkXXXX> Thanks.
[15:49:41] <kali> DarkLinkXXXX: that said, do not expect miracles for non-json types like dates and binary
[15:50:08] <DarkLinkXXXX> Yeah... I figured as much.
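Concretely, the two input shapes kali describes (db/collection/file names invented):

    # default mode: exactly one JSON object per line of the file
    mongoimport --db test --collection things --file lines.json
    # --jsonArray mode: the file holds a single top-level JSON array of objects
    mongoimport --db test --collection things --file array.json --jsonArray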
[15:50:45] <DeveloperDude> Hello everyone! I have a question that is probably very common, but I've found several answers and I'm not sure what is the proper way of doing this.
[15:50:53] <DeveloperDude> The thing is that I have a users collection and an events collection; coming from a relational background I was storing userids inside the events collection, but there are no joins in mongodb.
[15:51:00] <DeveloperDude> So there are 3 different solutions
[15:51:05] <DeveloperDude> 1.- Not using mongodb
[15:51:09] <DeveloperDude> 2.- Mapreduce workaround
[15:51:19] <DeveloperDude> 3.- Rethink my data schema
[15:51:32] <DeveloperDude> And I don't know which one is better
[15:51:45] <DeveloperDude> Any help? Thank you very much!
[15:51:53] <kali> DeveloperDude: have a look at that: http://blog.mongodb.org/post/65612078649/schema-design-for-social-inboxes-in-mongodb
[15:52:20] <kali> DeveloperDude: do not use mapreduce for user-facing queries. never.
[15:52:40] <DeveloperDude> Thanks! I'm checking it
[15:52:58] <DeveloperDude> I suspected as much; it seems dirty, but it's mentioned in some blogs and forums
[15:53:22] <kali> DeveloperDude: do not believe anything that comes from the internet
[15:53:40] <DeveloperDude> :-D
[15:55:11] <dawik> good advice
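The usual no-joins pattern for DeveloperDude's case, sketched in pymongo with invented collection and field names: keep user ids in each event, then resolve them in one extra query with $in instead of mapreduce:

    from pymongo import MongoClient

    db = MongoClient().mydb                             # placeholder database
    events = list(db.events.find({"type": "signup"}))   # hypothetical event query
    user_ids = list({e["user_id"] for e in events})     # distinct referenced ids
    users = {u["_id"]: u for u in db.users.find({"_id": {"$in": user_ids}})}
    for event in events:
        event["user"] = users.get(event["user_id"])     # join done in the application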
[16:49:20] <stickperson> anyone around?
[17:24:16] <whomp> how can i speed up my mongoimports? for example, i was thinking maybe i could convert the json to bson and import it or something
[22:00:35] <sudormf> Question. If I have 20 tables in a nosql database, and i have one database node, but it's getting overloaded, so i add 3 more database nodes, now i have 4 total db nodes. will my db scale automatically to use these 4 nodes, or will i have to do any manual configuration to be able to use them?
[22:04:06] <jumpman> hey, i have a quick best practice question. i'm going for performance in a reasonably large database
[22:04:21] <sudormf> dude no one answers here
[22:04:29] <sudormf> i asked my question 5 mins ago, no one said a peep
[22:04:36] <jumpman> maybe mine will be more interesting ;)
[22:04:57] <sudormf> maybe you can suck my cock
[22:06:33] <jumpman> ...anyways, i'm trying to store a 2d plane of 'solar systems' each with 'planets' and each planet with a 'puzzle'. but i want to have ~15,000 solar systems with 5-10 planets each.
[22:06:55] <jumpman> i'm planning on storing the 2d plane in chunks - each with a 'solar system' location to be drawn and a position and id inside
[22:07:38] <jumpman> then the id points into a separate collection of 'solar systems', each of which would include the id and its planets
[22:07:47] <toothrot> nobody answers because 5 minutes passed?
[22:08:01] <sudormf> yes
[22:08:13] <jumpman> here's where the question comes in: would it be faster to store all level data in one collection with 'puzzle id' so that the 'solar systems' call is smaller
[22:08:30] <jumpman> or better to store the puzzles in the planets and just have larger units in starsystems
[22:10:06] <jumpman> basically i'm trying to decide between one collection containing ~1,000,000 puzzles and one collection containing all of the ~15,000 solar systems with the same data from the first collection
[22:10:25] <sudormf> probably the latter so there's less data to search through
[22:10:56] <jumpman> alright, cool
[22:11:05] <jumpman> i think that might be better, too
[22:11:44] <sudormf> yeah, i mean searching 15k has gotta be faster than searching 1m
[22:12:08] <jumpman> right, that's what i figured. i just didn't know if the amount of data within the entries would have the same impact
[22:14:13] <sudormf> would you rather look through 5 rooms or 1000 matchboxes?
[22:14:16] <sudormf> its the same thing
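As document sketches, the two designs jumpman is weighing look roughly like this as Python dicts (all field names invented): option 1 embeds each puzzle in its planet; option 2 keeps slim solar-system docs and a separate puzzles collection:

    # option 1: everything embedded; reads return the whole system, puzzles included
    system_embedded = {
        "_id": 42,
        "pos": [120, 7],
        "planets": [{"name": "p1", "puzzle": {"pieces": 64, "layout": "spiral"}}],
    }

    # option 2: slim system docs that reference puzzles stored in their own collection
    system_ref = {"_id": 42, "pos": [120, 7],
                  "planets": [{"name": "p1", "puzzle_id": 1001}]}
    puzzle = {"_id": 1001, "pieces": 64, "layout": "spiral"}  # lives in db.puzzles

With an index on _id either collection is cheap to search regardless of document count; the practical tradeoff is how much data each read returns versus paying a second query to fetch the puzzle.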
[22:17:15] <sudormf> anyone wanna answer my question now?
[22:19:27] <tornado_terran> hi everyone
[22:19:35] <sudormf> hi
[22:19:45] <sudormf> toothrot: been 15 mins now
[22:19:46] <tornado_terran> i'm new here and i have a general question
[22:19:52] <sudormf> k we don't want you
[22:19:54] <sudormf> go away
[22:20:10] <tornado_terran> k i'm out sry
[22:20:12] <sudormf> i'm kidding
[22:20:15] <sudormf> what's your q
[22:20:16] <tornado_terran> xd
[22:20:35] <tornado_terran> i'm trying to create an analytics tool based on golang and mongodb
[22:21:25] <tornado_terran> i'm not familiar with mongodb specifics, what is better: to use the aggregation framework or to fetch a lot of rows with a few fields
[22:21:27] <tornado_terran> like
[22:22:21] <sudormf> ask toothrot
[22:22:22] <tornado_terran> i would like to get the average/median value. I can use the aggregation framework or, in my case, just fetch all rows from some time bucket with fields like firstActionAt, lastActionAt
[22:23:09] <tornado_terran> ok
[22:23:20] <sudormf> i don't have any idea, i'm new myself
[22:23:32] <sudormf> i asked a q 15 mins ago and no one answered me
[22:24:08] <tornado_terran> i'm trying to create an analytics tool based on golang and mongodb. i'm not familiar with mongodb specifics, what is better: to use the aggregation framework or to fetch a lot of rows with a few fields. i would like to get the average/median value. I can use the aggregation framework or, in my case, just fetch all rows from some time bucket with fields like firstActionAt, lastActionAt
[22:24:18] <tornado_terran> fuck
[22:24:24] <sudormf> maybe you wanna ask on stackoverflow dude
[22:24:34] <tornado_terran> hmm.. good idea
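For the average at least, the aggregation framework can do the work server-side; a pymongo sketch with assumed db/collection names and the chat's firstActionAt/lastActionAt fields (aggregation of this era has no built-in median operator, so the median would still be computed client-side):

    from datetime import datetime
    from pymongo import MongoClient

    db = MongoClient().analytics                    # placeholder database
    pipeline = [
        {"$match": {"firstActionAt": {"$gte": datetime(2014, 7, 1)}}},  # time bucket
        {"$project": {"duration": {"$subtract": ["$lastActionAt", "$firstActionAt"]}}},
        {"$group": {"_id": None, "avgDurationMs": {"$avg": "$duration"}}},
    ]
    result = db.events.aggregate(pipeline)  # a dict with a "result" list in pymongo 2.x,
                                            # a cursor in later driver versions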
[22:27:24] <sudormf> this chatroom is the worst piece of shit since adolf hitler
[22:27:25] <sudormf> bye
[22:46:11] <sputnik13_> anyone use pymongo?
[22:46:37] <sputnik13_> I'm trying to add a user to a database remotely using pymongo and it just fails silently
[22:46:51] <sputnik13_> I run db.getUsers() on the database and nothing returns
[22:47:21] <sputnik13_> meanwhile if I add the user through the mongo cli client to the database it succeeds
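For comparison with the CLI, the pymongo 2.x call sputnik13_ needs looks like this (URI, names, and role are placeholders); a classic cause of the "silent" failure is the user being created in a different database than the one db.getUsers() is run against:

    from pymongo import MongoClient

    client = MongoClient("mongodb://admin:secret@dbhost:27017/admin")  # placeholder URI
    db = client["mydb"]                       # the database the user should belong to
    db.add_user("appuser", "apppass", roles=["readWrite"])  # pymongo 2.x API
    print(db.command("usersInfo"))            # verify; the shell equivalent of db.getUsers()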
[22:51:25] <deanclkclk> folks...I have 2 separate instances of mongodb
[22:51:41] <deanclkclk> how can I do a dump of one and restore it on the other?
[22:52:12] <deanclkclk> I'm just running the mongod service on headless VMs on both instances
[22:52:15] <deanclkclk> can someone help me?
[23:02:50] <sputnik13_> deanclkclk: http://stackoverflow.com/questions/6697871/transfer-mongodb-to-another-server
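The linked answer comes down to mongodump on the source and mongorestore against the target; a sketch with placeholder host names, run from a machine that can reach both VMs (or copy the dump directory across):

    mongodump --host vm1.example.com --port 27017 --out /tmp/dump
    mongorestore --host vm2.example.com --port 27017 /tmp/dump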